VMware Cloud Community
gerasimatos
Contributor

ESXi and Dell MD3000i SAN

We are trying to use iSCSI with ESXi and an MD3000i SAN. I have previously only worked with HP storage, so please bear with me.

In any case, I got iSCSI set up in ESXi and can see the devices and paths to the SAN. I have this configured as Round Robin, and we have dual gigabit NICs connecting to the SAN. I added the SAN storage to ESXi as well and migrated a machine onto it for testing.

My issue is that the performance is TERRIBLE. The machine locks up, performance is slow, it loses its connection, and it is just plain unusable in a production environment.

Can anyone chime in on what I should be looking at? Where to start?

16 Replies
gerasimatos
Contributor

On the SAN I have jumbo frames enabled, and on the Dell 5324 switches as well. Forgot to include that.

vmroyale
Immortal

Hello.

There is a lot of good information in this discussion.

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
gerasimatos
Contributor

That discussion really has nothing to do with this. Sorry.

I have everything configured. All LUNs are visible and usable; it's the PERFORMANCE that is MISERABLE.

dcoz
Hot Shot

gerasimatos,

I would have a look at the vCenter networking stats as well as the esxtop stats.

If you are using ESXi, it's probably best to download the vMA and run resxtop against the host that's running the VM.
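For reference, a resxtop session from the vMA looks something like this (the hostname and account below are placeholder assumptions, not details from this thread):

```shell
# Run resxtop from the vMA appliance against the ESXi host running the VM
# (hostname and user are placeholders):
resxtop --server esx01.example.com --username root
# Once in the interactive view, press 'd' (disk adapter) or 'u' (disk device)
# and watch the DAVG/cmd, KAVG/cmd, and GAVG/cmd latency columns.
```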

A good doc to read is http://communities.vmware.com/docs/DOC-10352

Also in terms of increasing the available bandwidth that iSCSI can use, have a look at this great blog post.

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vs...

Hope this helps

DC

AndreTheGiant
Immortal

Try changing the path policy from Round Robin to MRU.

Also be sure that the network topology is the one suggested for the MD3000i configuration: two isolated networks, with one IP per network on each controller and one IP per network on each host.

You must add all four IP targets.
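On vSphere 4.x the policy change can also be made from the CLI; a sketch, where the device ID is a placeholder to be replaced with the one reported for your MD3000i LUN:

```shell
# Show iSCSI devices and their current path selection policy:
esxcli nmp device list
# Set one device to Most Recently Used (replace the naa ID with yours):
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_MRU
```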

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
malaysiavm
Expert

You need to turn on the iSCSI optimization option on the switch, and enable jumbo frames on the vSwitch, the physical switch, and the MD3000i.
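A sketch of the vSwitch side on ESX/ESXi 4.x, assuming a vSwitch named vSwitch1 and a VMkernel port group named iSCSI1 (both placeholders); on ESXi the same commands are run as their vicfg-* equivalents from the vMA:

```shell
# Raise the vSwitch MTU to 9000:
esxcfg-vswitch -m 9000 vSwitch1
# An existing VMkernel port does not inherit the new MTU; it must be
# recreated with -m 9000 (IP and netmask are placeholders):
esxcfg-vmknic -d iSCSI1
esxcfg-vmknic -a -i 192.168.130.10 -n 255.255.255.0 -m 9000 iSCSI1
```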

Craig

Craig | vExpert 2009 & 2010 | NetApp NCIE, NCDA 8.0.1 | Malaysia VMware Communities - http://www.malaysiavm.com
gerasimatos
Contributor

How can I enable jumbo frames on the vSwitch?

gerasimatos
Contributor

I changed from Round Robin to MRU and didn't see any benefit.

I ran esxtop on the host and checked the DAVG numbers, and they seemed very high: greater than 150 ms on average.

AndreTheGiant
Immortal

Can you give more info on your storage configuration?

How many disks? SATA or SAS?

How many Virtual disks and how many VMs on those LUNs?

Andre

gerasimatos
Contributor

8 disks (1 hot spare) so 8 in use, 15K SAS drives in a RAID 10 config. 1 LUN. 8 VMs total.

AndreTheGiant
Immortal

How can you build a RAID 10 configuration with 7 disks?

Your disks seem fine. A better solution would be to have 2 LUNs on two different controllers to maximize the paths (in your case you can use only 2 of the 4 available paths).

Do you have any errors or warnings on your MD3000i?

I suggest calling Dell support to check your speed issue.

Andre

gerasimatos
Contributor

RAID 10 (or 1+0) is a mirrored data set (RAID 1) which is then striped (RAID 0), hence the "10" name.

A RAID 10 array requires a minimum of four drives (and always an even number), and is commonly implemented with more to take advantage of the striping speed benefits.
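A quick sanity check on the arithmetic (the per-drive size is an assumed example): RAID 10 halves raw capacity and needs an even drive count, which is why 7 active drives cannot form one.

```shell
disks=8       # drives in the array (example; hot spare excluded)
size_gb=146   # capacity per drive in GB (example)
# RAID 10 stripes mirrored pairs, so the drive count must be even
# and only half the raw space is usable.
if [ $(( disks % 2 )) -ne 0 ]; then
    echo "RAID 10 needs an even number of drives" >&2
    exit 1
fi
usable_gb=$(( disks / 2 * size_gb ))
echo "${usable_gb} GB usable"
```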

K-MaC
Expert

Andre is referring to the fact that you need an even number of disks for RAID 10.

Cheers

Kevin

gerasimatos
Contributor

Yeah, 8 disks. 1 hot spare.

K-MaC
Expert

I think it was a simple misunderstanding.

"8 disks (1 hot spare)" could be 8 or 9 disks total. "8 disks (+1 hot spare)" to me reads as 9 disks.

It looks like he read it as 8 disks total rather than 9 disks total. Or perhaps I am the one misinterpreting. ;)

At any rate, good luck with your initial issue.

Cheers

Kevin

SomeJoe7777
Enthusiast

Have you followed the MD3000i setup as detailed in this document from Dell?

This is my exact setup, using 3 ESXi hosts with vSphere 4.0 Essentials Plus. I have not had any problems. I have 2 arrays in the MD3000i - one is 10+1 15K RPM SAS drives (1 hot spare and 10 in RAID-5), and the other is 5x 10K RPM SAS drives (5 in RAID-5). I'm running 16 VMs on this unit with no issues whatsoever.

However, I am not running jumbo frames (my switches don't support it on a per-VLAN basis, and I didn't want to enable jumbo frames on the whole switch). That document gives you two paths to follow - one if you are using jumbo frames, and another if you aren't. Obviously I followed the one using standard-sized frames.

Follow Dell's instructions for setup exactly, paying particular attention to the subnetting. Basically, everything connected to switch A is on one subnet (ESX host port A, switch A, MD3000i controller 0 port A, and controller 1 port A), and everything connected to switch B is on a second subnet (ESX host port B, switch B, MD3000i controller 0 port B, and controller 1 port B). The switches are NOT connected together in any way.
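The layout above, sketched with example addresses (the IPs are assumptions for illustration, not values from the thread):

```shell
# Two isolated iSCSI subnets - example addressing only:
#   Subnet A (everything on switch A):
#     ESX host port A         192.168.130.10/24
#     MD3000i ctrl 0 port A   192.168.130.101
#     MD3000i ctrl 1 port A   192.168.130.102
#   Subnet B (everything on switch B):
#     ESX host port B         192.168.131.10/24
#     MD3000i ctrl 0 port B   192.168.131.101
#     MD3000i ctrl 1 port B   192.168.131.102
# No uplink between switch A and switch B.
```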

There is also a thread at the bottom of that document that deals with Round Robin + Jumbo Frames issues. It did not have a resolution in it from what I could tell.
