VMware Cloud Community
hattrick4467
Contributor

Help Needed with iSCSI and NIC Use

We currently have two VMware 5.0 hosts connected to two dedicated Dell 5524 switches, along with a Dell EqualLogic 400XV SAN. The two Dell 5524 switches are connected with stacking cables, so they are seen as one switch. Each host has two NICs connected to the iSCSI switches. All devices on this iSCSI network are on the same subnet, with each NIC on the hosts having its own IP. These NICs are teamed and load balanced using Route based on originating virtual port ID. It seems to work OK, but while we were working with VMware support on a separate issue (ghosted datastores caused by a v4 to v5 upgrade), they told me it would be better to set the NICs up to use IP hash, as long as the switches support it. The config is shown here (the esxcli view of the current teaming policy is pasted below the screenshots):

vmnicconfig.png

vmteaming.png
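For reference, here is how I checked the current policy from the ESXi shell. This is just what I ran on my hosts, assuming the iSCSI port groups live on vSwitch1 (substitute your own vSwitch and port group names):

    # Show the load balancing / failover policy on the iSCSI vSwitch
    # ("portid" corresponds to Route based on originating virtual port ID)
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1

    # Show any per-port-group override for an iSCSI port group
    esxcli network vswitch standard portgroup policy failover get --portgroup-name=iSCSI1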

I should have also mentioned that the Dell EqualLogic SAN has two controller ports, each a 1 Gb connection. These are both connected to the Dell 5524 switches, with one port going to each switch for redundancy.

So here is where my question comes in. According to the Dell EqualLogic documentation, your iSCSI network should be a single IP network. I have read elsewhere that this is because the box has individual IPs for each controller port but a virtual IP for the unit itself. The documentation also says that if your switches support stacking, you should use that instead of LAG ports. For some other Dell iSCSI devices and other SANs, the recommendation is the opposite: don't stack the switches, create two separate IP networks, connect each host to both, and connect each SAN port to both (or in some cases more ports).

What I would like to know is: if I need to keep my switches stacked and on a single IP scheme, what is the best way to set them up so that they give the best throughput and also provide redundancy? It seems like I would want an active/active config for the iSCSI properties rather than active/standby, as I have seen in a few examples. I am leaning towards leaving the switches stacked and creating LAG ports across the physical switches (within the stack). I would have a port group for host1 and a port group for host2. If either switch goes down, the other switch would continue to carry the traffic.

I really wish I had configured and tested this before putting production servers on it, although I could technically fail them over to a single host. Also, are there any good tools out there to verify that my paths and throughput are working as they should?

Thanks for your help.

1 Reply
AndreTheGiant
Immortal

Please refer to the EqualLogic documents for vSphere (there is also an offline copy in the community: http://communities.vmware.com/servlet/JiveServlet/download/1387588-29608/Configuring%20VMware%20vSph... )

Basically, you need a 1:1 mapping between your VMkernel interfaces and your NICs: one port group per NIC, each with the failover order overridden so only one uplink is active.
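From the command line it would be something like this. This is only a sketch, assuming vSwitch1 with uplinks vmnic2/vmnic3, port groups iSCSI1/iSCSI2, and example IPs; adjust the names and addresses to your environment:

    # One port group and one VMkernel interface per physical uplink
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI1 --vswitch-name=vSwitch1
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI2 --vswitch-name=vSwitch1
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI1
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI2
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.10.12 --netmask=255.255.255.0 --type=static

    # Override the failover order so each port group uses exactly one active uplink (the 1:1 mapping)
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI1 --active-uplinks=vmnic2
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI2 --active-uplinks=vmnic3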

Then add both to the iSCSI adapter (in 5.0 this can also be done from the GUI).
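From the ESXi shell the binding looks like this, assuming the software iSCSI adapter is vmhba33 (check the real name with "esxcli iscsi adapter list"):

    # Bind both VMkernel interfaces to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

    # Verify the bindings, then rescan
    esxcli iscsi networkportal list --adapter=vmhba33
    esxcli storage core adapter rescan --adapter=vmhba33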

And set MTU=9000 to enable jumbo frames (it has to match end to end: vSwitch, VMkernel interfaces, and the physical switch ports).
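For example (again just a sketch with the same assumed names; use your EqualLogic group IP for the ping test):

    # Jumbo frames on the vSwitch and on each iSCSI VMkernel interface
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000

    # Confirm a 9000-byte frame makes it end to end (8972 = 9000 minus 28 bytes of IP/ICMP headers)
    vmkping -d -s 8972 <EqualLogic group IP>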

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro