To make life simpler, why not just assign everything to 10.10.100.x instead of messing around? With that in place the design would be very clean and clear-cut, in that .100 addresses are meant for iSCSI traffic only. Each host would need its own IP address. You might want to look into this: http://www.yellow-bricks.com/2009/03/18/iscsi-multipathing-with-esxcliexploring-the-next-version-of-esx/
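The single-subnet approach in the linked article boils down to one VMkernel port per physical NIC, all on 10.10.100.x, bound to the software iSCSI initiator. A minimal sketch using the ESX 4.x esxcli syntax from that article; the portgroup names, IP addresses, and the vmhba33 adapter name are assumptions for illustration:

```shell
# One VMkernel port per physical NIC, all on the 10.10.100.x subnet
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vmknic -a -i 10.10.100.11 -n 255.255.255.0 iSCSI1
esxcfg-vswitch -A iSCSI2 vSwitch1
esxcfg-vmknic -a -i 10.10.100.12 -n 255.255.255.0 iSCSI2

# Bind both vmk ports to the software iSCSI initiator (ESX 4.x syntax;
# on ESXi 5 the equivalent is "esxcli iscsi networkportal add -n vmkX -A vmhba33")
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33   # verify the binding
```

Note that each portgroup also needs its NIC teaming policy overridden so exactly one pNIC is active per vmk port, otherwise the binding is not valid.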
To make life simpler, why not just assign everything to 10.10.100.x instead of messing around.
I agree that it would be much more "clean", but I understood the SAN manufacturer to recommend the approach with multiple subnets, probably to spread traffic better.
Yes, as ricnob states, the manufacturer says in its implementation guide that in order to utilize all uplink ports to the SAN, separate subnets must be used. They publish an ESX implementation guide that walks through configuring the VMKernel ports, and I see they are all on separate subnets. So I'm assuming I need to ensure routing is configured for these subnets on the physical switches, but I can't remember whether VLAN trunking could also be used to allow the VMKernel ports to talk on all trunked subnets. My basic networking skills are a bit rusty after years of being on server teams.
I'm connecting these ESX hosts to a Dell MD3600i unit with 2 stacked layer 3 switches. Because they are layer 3, I'm sure I could set up routing for the subnets if I must. I probably just need to go read the documentation more closely tomorrow.
I know this kind of storage very well (note that they are LSI-based, so HP and IBM have similar solutions).
You MUST use two different iSCSI networks, and it's better if they are physically separated.
If you have spare switches (usually Dell PC 54xx) you can simply use one switch per network (with NO trunk between them).
If you are using shared switches, put one VLAN on one switch and the other VLAN on the second switch, and do not carry those VLANs across the switch trunk.
This kind of configuration is the same design as an FC fabric...
Thanks for the help, Andre. So, assuming I configure two subnets, 10.10.100.0/24 and 10.10.200.0/24, I presume I need to set up a static route on the switch from the 100.0 net to the 200.0 net. Is that correct?
The recommendation I have heard is that SAN traffic should be on a separate switch, or at least a separate VLAN, isolated from all other network traffic. I have never heard of isolating individual ports on a SAN. Does your SAN not load-balance traffic across its ports? If it does, you may actually lose performance: instead of being able to load-balance over 4 ports you will only get 2 per subnet/VLAN, assuming the separation doesn't cause loss of packets. Just a thought - I am not there to see your setup. What type of SAN do you have?
I mean a really isolated network. No routing between them.
It is the multipath software that switches from one path to another, and if needed from one network to another.
Note that the PowerVault has, on each controller, one interface on one network and one on the other.
Gotcha, now it makes sense. I guess it wasn't dawning on me that the VMware iSCSI initiator handles MPIO in that way.
It's a Dell MD3600i with dual controllers and 2 x 10GbE ports on each controller. The Dell documentation states that the two ports on each controller should be in separate subnets. Also, these are isolated switches; no other network traffic will be competing with the ESX-to-storage iSCSI traffic.
This is what Dell states in their ESX implementation guide:
Subnets should be allocated by the number of array ports per controller. With the MD36xxi you only get an active path to multiple ports on the same controller if they are on different subnets/vLANs. Since the MD3600i has 2 ports per controller you achieve your best throughput with 2 subnets.
Regardless of the pathing method in VMware, you will only have one active port on each controller of the array if you do not create additional subnets. Without additional subnets this will cause a bottle-neck at the controller since pathing is at the disk group level rather than the LUN level for the MD3xxxi. Configuring RR to the same LUN across both controllers is not recommended.
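Dell's two-subnet rule works out to a layout along these lines. This is only a sketch: the addresses and portgroup names below are made-up examples, not Dell defaults:

```shell
# Illustrative MD3600i addressing plan with 2 subnets:
#
#   Controller 0, port 0 -> 10.10.100.101/24   (subnet A)
#   Controller 0, port 1 -> 10.10.200.101/24   (subnet B)
#   Controller 1, port 0 -> 10.10.100.102/24   (subnet A)
#   Controller 1, port 1 -> 10.10.200.102/24   (subnet B)
#
# Each ESX host then gets one VMkernel port in each subnet, so a host has an
# active path to BOTH ports of whichever controller owns the disk group:
esxcfg-vmknic -a -i 10.10.100.11 -n 255.255.255.0 iSCSI-A
esxcfg-vmknic -a -i 10.10.200.11 -n 255.255.255.0 iSCSI-B
```

With only one subnet, both vmk ports could reach only one port per controller, which is the controller-level bottleneck Dell's guide warns about.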
I have a question here:
An MD3600i with dual controllers, connecting to two isolated PowerConnect 54xx switches (triple fibre trunk between the switches)
3 x ESXi 5 servers, each with two physical QLogic iSCSI HBAs
Considering only that setup, I could easily do as per the manual with separate subnets.
But I also have an EqualLogic PS5000E array (also dual controller),
and the PS5000E has active-passive controllers presenting a SINGLE group IP (not per-port IPs).
No subnets are involved there.
So if I use both arrays (and I must use both), I see no way (with my hardware) to do it "right" as per the Dell recommendation.
(If I use subnets, then I have no redundancy to the PS5000E from the ESX servers, as only one HBA can be on the same subnet as the PS5000E.)
Anybody have any suggestions?
Scerazy - I had exactly the same issue. We have a number of VMware ESX hosts with dual 10GbE NICs, via 2 x PC8024 switches, to an EqualLogic PS6010XV SAN, and had just implemented a PowerVault MD3600i SAN. The fact that the MD3600i requires separate subnets, yet the EqualLogic requires the same subnet, initially stumped me. I ended up creating additional vmk ports on the vSwitch in different subnets and then bound those to the s/w initiator - so basically 2 x vmk's per host on the same subnet for the EQL, and 2 x vmk's on the same vSwitch per host on 2 different subnets for the MD3600i. However, this caused some intermittent issues with iSCSI traffic which we're still trying to suss out.
Sure - I used two dual-port BCM5709 PCI-Express adapters (so 4 connections); this way I have load balancing to both arrays.
That is the best way I know of so far.
Our client didn't want to spend the money on extra HBAs (i.e. a mere 6 extra dual-port 10GbE cards), so I ended up creating multiple vmk's across the dual 10GbE pNICs to both the Dell EqualLogic and Dell MD3600i SANs. However, I have found it is *MANDATORY* that you have separate VLANs if using the same iSCSI switching core for both SANs. I made the mistake of having a single VLAN for all iSCSI traffic to both SANs and things turned to absolute poo (mostly due to contention, I believe, causing hosts to drop off VC / management timeouts, with vmkernel logs showing resources taken up by iSCSI continuously retrying LUNs - BTW, I use the software iSCSI initiator as we're only running Enterprise edition). I spoke to Dell enterprise support on how this should be configured and they said they have NO whitepaper or technical documentation on how it should be done, so it ended up being a bit of trial and error over the past few months.
Anyway, the optimal solution is this: I created 3 separate VLANs / subnets (i.e. 1 VLAN per subnet) - VLAN11 for the EqualLogic, and VLAN36 and VLAN37 for the MD3600i (as it requires 2 different subnets spread across the dual controllers). The ports on the iSCSI switch to the vSwitch on the hosts are trunk ports (VLAN 11, 36, 37), and the ports on the iSCSI switch to the SANs are access ports (VLAN11 for all EQL ports, VLAN36 to 2 ports over the dual controllers, VLAN37 to the other 2 ports over the dual controllers).
On the hosts I have vmk's as: iSCSI1, iSCSI2, iSCSI3, iSCSI4. iSCSI1 and iSCSI2 go to the EqualLogic (effectively 2:1 binding to 10GbE pNIC1 and pNIC2 respectively), and iSCSI3 and iSCSI4 go to the MD3600i (effectively 1:1 binding per subnet to 10GbE pNIC2 and pNIC1 respectively - i.e. the reverse of the EQL, so under normal circumstances traffic is *mostly* balanced, although we do have the EQL PSP installed and are using MRU for the MD3600i... so, yeah, 'mostly!'). However, this seems to work well.
I suspect the (intermittent iSCSI) issues we had were potentially due to the VMware iSCSI stack getting confused when sending/receiving iSCSI traffic destined for different SANs over the same VLAN (or contention issues). Hope this helps someone in the same situation - it caused me a few headaches for a while!
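For anyone trying to replicate the three-VLAN layout above, the host side comes down to tagging one portgroup per vmk. A sketch only - the VLAN IDs are the ones from the post, but the vSwitch and portgroup names are assumptions:

```shell
# Portgroups on the host's iSCSI vSwitch, one per vmk, tagged per SAN:
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vswitch -v 11 -p iSCSI1 vSwitch1   # EqualLogic (VLAN11)
esxcfg-vswitch -A iSCSI2 vSwitch1
esxcfg-vswitch -v 11 -p iSCSI2 vSwitch1   # EqualLogic (VLAN11)
esxcfg-vswitch -A iSCSI3 vSwitch1
esxcfg-vswitch -v 36 -p iSCSI3 vSwitch1   # MD3600i subnet 1 (VLAN36)
esxcfg-vswitch -A iSCSI4 vSwitch1
esxcfg-vswitch -v 37 -p iSCSI4 vSwitch1   # MD3600i subnet 2 (VLAN37)

# The switch ports facing the hosts are trunks carrying VLANs 11, 36 and 37;
# the ports facing the arrays are access ports in the matching VLAN.
```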