VMware Cloud Community
dsolis
Contributor

ESX 3.5 and Dell MD 3000i Latency

I am experiencing latency issues with my new VM installs and a Dell MD 3000i. We have two Dell 2950s with ESX 3.5 installed, running four VMs between them, with the VM files stored on the MD 3000i. All the VMs are plain Windows 2003 servers basically doing nothing at this point.

The issue: when running any of the VMs from either 2950 off the MD 3000i, the VM becomes unresponsive for periods of time. Constant pings to each VM show packet loss during random periods. I have even shut down all VMs except one and still see the same issue. I can ping the iSCSI port IP addresses with no packet loss. The iSCSI ports and the VMkernel ports are isolated on their own switch with no other devices on it. We have tried different switch ports, tried different NIC ports on each 2950 for the VMkernel, service console, and VM network traffic, and swapped out cables, all with no luck.

I also tried relocating the VM files off the MD 3000i and onto the 2950's local drives; booted that way, the VM works just fine with no packet loss. As soon as I move the files back onto the MD 3000i, the latency and packet loss return. I have done three fresh installs of ESX on the 2950s to see if that would resolve the issue, and still no luck.

Has anyone had the same issue? Dell seems to have me running around in circles. Could it be that the MD 3000i is not able to keep up? I find that hard to believe with even one VM running. The MD 3000i is configured as RAID 5 with 14 146 GB SAS drives, with the full chassis presented as one 2 TB disk array. Any ideas?
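To put a number on those "random periods" of loss, it can help to run a long ping and compare the summary for the VM address against the iSCSI port addresses. A rough sketch (the `parse_ping_summary` helper and the sample summary line are my own, based on the standard Linux `ping` summary format, not anything ESX-specific):

```python
import re

def parse_ping_summary(output):
    """Extract sent/received counts and loss percentage from a
    Linux-style ping summary line."""
    m = re.search(r"(\d+) packets transmitted, (\d+) received", output)
    if not m:
        raise ValueError("no ping summary found")
    sent, recv = int(m.group(1)), int(m.group(2))
    loss_pct = 100.0 * (sent - recv) / sent
    return sent, recv, loss_pct

# Example summary line from a long ping run (illustrative numbers)
sample = "100 packets transmitted, 93 received, 7% packet loss, time 99012ms"
print(parse_ping_summary(sample))  # -> (100, 93, 7.0)
```

Running the same long ping against the VM and against the iSCSI ports, then comparing the loss percentages, makes it easier to show Dell exactly where the loss appears.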

4 Replies
whitfill
Contributor

I'm a little new to this stuff myself, but I think you have 2 controllers with 4 ports, yet you are using 14 drives in a single RAID 5 array. I read somewhere that that is WAY too many drives for a RAID 5 config; they said performance actually starts to degrade around 5-7 disks. The parity that has to be maintained across all those disks is probably taking too long to write. Try breaking your array into smaller chunks: maybe make a few sets of mirrored LUNs for the operating systems and spread the servers across those, then make a couple of smaller RAID 5 disk groups with fewer drives for your data.
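The classic write-penalty arithmetic behind this kind of advice can be sketched like this (the per-spindle IOPS figure is a generic 15K SAS assumption, not an MD3000i spec; RAID 5's penalty of 4 comes from the read-data/read-parity/write-data/write-parity cycle on each small random write):

```python
def usable_write_iops(drives, iops_per_drive, write_penalty):
    """Rough random-write IOPS a disk group can sustain,
    ignoring controller cache effects."""
    return drives * iops_per_drive / write_penalty

# Same 14 spindles, two layouts (illustrative assumptions):
raid5_wide = usable_write_iops(14, 150, 4)   # one wide RAID 5 group
raid10 = usable_write_iops(14, 150, 2)       # mirrored pairs instead
print(raid5_wide, raid10)  # -> 525.0 1050.0
```

Note this is back-of-the-envelope only; with a single mostly idle VM, either figure should be far more than enough, which points back at the network/iSCSI path rather than raw spindle count.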

planetelex
Contributor

Hi, I'm currently having very similar issues with the same hardware. Did you ever fix this problem? Thanks

jpuskar
Contributor

Same problem here, but we split the MD3000i into 4 datastores.

You'd think that with 15x 450 GB SAS drives in RAID 10 the performance would be amazing.

JohnADCO
Expert

I use my MD3000i's with 14 drives in a single RAID 5 disk group. I don't experience the issue. 3 hosts, 24 VM's per host.

One thing I noticed with the MD3000i is that it runs much better with more defined LUNs, regardless of the disk group configuration. I mean, if we put even two really heavy VMs on the same LUN, they kind of sucked.
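One plausible mechanism for the "more LUNs run better" observation: each LUN gets its own command queue on the host side, so splitting a disk group into more LUNs raises the total number of I/Os the hosts can keep in flight to the array. A minimal sketch of that arithmetic (the per-LUN queue depth of 32 is a commonly cited ESX default I'm assuming, not something measured on this array):

```python
def aggregate_outstanding(luns, per_lun_queue_depth=32):
    """Max commands a host can have in flight across its LUNs,
    assuming one fixed-depth queue per LUN."""
    return luns * per_lun_queue_depth

# One big LUN vs. the same disk group carved into four LUNs
print(aggregate_outstanding(1), aggregate_outstanding(4))  # -> 32 128
```

It also explains why two heavy VMs on one LUN hurt each other: they contend for the same fixed-depth queue.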

But with only 1 VM running as described, I don't see how that could be a factor.

jpuskar? Can you describe exactly what is happening?

I remain impressed with the performance we get off the lowly MD3000i's.
