VMware Cloud Community
kcucadmin
Enthusiast

ESXi 5 dvSwitch iSCSI Software Adapter and MPIO

I am in the process of migrating from standard switches to dvSwitches, using two 10 GbE uplinks into two Nexus 5548s. During my research I kept failing to find information on how to configure the software iSCSI adapter with MPIO.

I would like the 10 GbE uplink NICs to carry VM Network, vMotion (VLAN 90), MGMT (VLAN 60), and iSCSI (VLAN 70) traffic.

I have come up with a "theorycrafted" config and would like some validation.

I have a dvSwitch with two uplink adapters, configured with static port binding and VLAN trunking.

I have created two dvPortGroups for iSCSI (VLAN 70) and set the teaming/failover policy so that each port group has only a single active uplink, with the other uplink set to unused. For this config, both dvPortGroups are in the same VLAN 70.
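
Roughly the layout I have in mind (the port group and uplink names here are just placeholders for whatever I end up calling them):

    dvPG-iSCSI-A (VLAN 70): active uplink = dvUplink1, unused = dvUplink2
    dvPG-iSCSI-B (VLAN 70): active uplink = dvUplink2, unused = dvUplink1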

I have created dvPortGroups for VM Network, vMotion (VLAN 90), DMZ (VLAN 19), and MGMT.

At the host I have added two virtual adapters (VMkernel ports) and assigned each one to one of the iSCSI dvPortGroups.

I then went to the iSCSI software adapter's network configuration and added the two virtual adapters (port binding).
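
For reference, this is the CLI equivalent of that port binding step as I understand it (vmhba33, vmk1, and vmk2 are just placeholders; the real names come from "esxcli iscsi adapter list" and "esxcli network ip interface list"):

    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    esxcli iscsi networkportal list --adapter=vmhba33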

This should give me multipathing on the iSCSI software adapter through the dvSwitch.
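
Once the array is actually presenting LUNs, my plan is to check the paths and, if the array supports it, switch the path selection policy to Round Robin, something like this (the naa ID is just an example):

    esxcli storage core path list
    esxcli storage nmp device list
    esxcli storage nmp device set --device=naa.60012345000000000000000000000001 --psp=VMW_PSP_RR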

I added additional virtual adapters for each of the other dvPortGroups and set up the correct IP information for each VLAN.
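
To sanity-check the VMkernel IP setup per port group, I figure I can just run:

    esxcli network ip interface list
    esxcli network ip interface ipv4 get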

Am I on the right path here?

6 Replies
rickardnobel
Champion

Robert Samples wrote:

Am I on the right path here?

It seems to be a working configuration. Do you have the iSCSI SAN up and running at the moment, and can you verify that you can access it and fail over?
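
A simple way to test it, once the SAN is reachable, is to vmkping the target portal out of each bound VMkernel port, then pull one uplink (or reboot one switch) and repeat. For example (the vmk names and target IP are placeholders, and on newer builds vmkping takes -I to pick the outgoing VMkernel port):

    vmkping -I vmk1 192.168.70.10
    vmkping -I vmk2 192.168.70.10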

My VMware blog: www.rickardnobel.se
kcucadmin
Enthusiast

I'm still waiting on hardware.

I have two HP ProCurve 5304xl switches, but I'm not sure they will really work with a dvSwitch uplink trunk. I don't think they support LLDP, but I'll research it; they are five-year-old switches. Also, they're not really set up for VLANs at the moment, but I could probably strip the VLAN settings out... All I'm really after is testing dvPortGroup binding to the iSCSI software adapter. I have it configured now and it says the binding is "Compliant" in green, so I assume we are good to go.

I guess I could plug both uplinks into a single switch and at least test connectivity.

rickardnobel
Champion

Robert Samples wrote:

I have two HP ProCurve 5304xl switches, but I'm not sure they will really work with a dvSwitch uplink trunk. I don't think they support LLDP, but I'll research it; they are five-year-old switches.

I have worked a bit with the 5304xl series, and even though it has been a few years, I remember that they do support LLDP.

My VMware blog: www.rickardnobel.se
kcucadmin
Enthusiast

They support LLDP, but I don't think they support cross-switch trunks, i.e. each NIC would need to be plugged into the same switch.

rickardnobel
Champion

Robert Samples wrote:

They support LLDP, but I don't think they support cross-switch trunks, i.e. each NIC would need to be plugged into the same switch.

Yes, there is no support for cross-switch trunks (that is not related to LLDP), but with MPIO from ESXi you should not set up the switch ports as link aggregation trunks anyway. So there should not be a problem with the HP 5304; just set up the iSCSI VLAN as tagged ports on both switches.

My VMware blog: www.rickardnobel.se
kcucadmin
Enthusiast

Rick,

The problem is not with my current 1 GbE iSCSI setup, as I have eight separate 1 GbE NICs.

The problem will be going forward, where I only have one two-port 10 GbE NIC, with one port per switch for failover.

The HPs are going away. I know how to make it work on today's hardware; I have two Nexus 5548s with a 2248 fabric extender. I'm trying to figure out how the new model would look.

I will be consolidating ALL traffic loads onto the 10 GbE NICs. I did not want to leave any 1 GbE connectivity in place, as I only have a single 2248 fabric extender; it will be multi-homed, though, so I should still have "switch failure" protection. My thought right now is iLO, legacy 10/100/1000 equipment, and possibly one 1 GbE uplink from each ESX host for mgmt/console traffic.

So that means ALL the dvPortGroups will need to ride over two dvUplinks. I want the VM Network, iSCSI, and vMotion traffic to all flow over the Emulex card, which will be uplinked into two separate Nexus 5548s. I believe we will create a vPC, so to ESX it will appear as being plugged into the same switch. I'm just wondering what specific settings on the uplink side should be configured to support iSCSI MPIO. If the default settings are fine, that's great, very easy for me. Since I'm not finding configuration examples out there, I'm assuming either nobody does it this way or the defaults are fine.
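
One thing I can at least verify from the host side is which physical NICs sit behind the dvUplinks and which port each VMkernel interface landed on, with:

    esxcli network vswitch dvs vmware list
    esxcli network ip interface list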

Do you see how I have MPIO questions now?

When I say MPIO, I don't really need the throughput of two paths; I just want to protect against a switch failure or reboot. If all your VMs are on remote storage and that iSCSI link fails, bad things happen.

The bottom line is budget. In a perfect world I would have two Emulex cards per host and two 2248 fabric extenders, but that would add another 20k to the overall cost of the project.

I added a PDF to the original post that shows what I'm trying to do.
