VMware Cloud Community
ArrowSIVAC
Enthusiast

iSCSI - DVSwitch LACP

I have been researching design.  I need to have a single DVswitch across the environment (I need a single management control plane for VLAN creation, migration, etc.).

I have moved all my hosts to 2 x 10Gb per host.  I have SOME need for 10Gb.  I use FCoE or FC for primary block storage, but iSCSI for migrations and copy data.

I tried to follow various guides to get the iSCSI software initiator to bind to a dedicated VMkernel interface used only for iSCSI.  This connects to what is now an LACP uplink to our Brocade VDX fabric.

Guides and documents I tried to follow:

http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-bind...

VMware KB: Enhanced LACP Support on a vSphere 5.5 Distributed Switch

(Diagram below)

pastedImage_0.png

Create LACP bond

pastedImage_2.png

The next step would be to bind the newly created VMkernel interface to the new logical LACP NIC.

pastedImage_1.png
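As a sanity check while testing this, the LAG state can also be queried from the host shell.  A sketch only, assuming the esxcli LACP namespace that I believe came with the enhanced LACP support in 5.5:

esxcli network vswitch dvs vmware lacp status get    # negotiation / partner state per LAG
esxcli network vswitch dvs vmware lacp stats get     # LACPDU counters per uplink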

This article implies that if I did NOT use LACP, but instead dedicated a separate 10Gb uplink to each port group within the DV switch, I could fix the issue.  However, that did not change the fact that it would not allow me to bind iSCSI when more than one uplink is present in a single DV switch:

VMware KB: Considerations for using software iSCSI port binding in ESX/ESXi

I would prefer to have a single logical 20Gb NIC and bind iSCSI to that with a dedicated VMkernel interface.

1) Can I use an LACP uplink within a single DV switch and get the iSCSI software initiator to bind to a VMkernel interface on it?

2) If I can't do the above, what step am I getting wrong when trying to have two uplinks within a single DV switch with iSCSI bound to each uplink / NIC port?

Any other links / guides that consolidate the process and provide vSphere 6-focused instructions would be appreciated.  The issue is that there is a lot of discussion out there, but much of it is old, so there are lots of notes saying it "should work", yet actually getting it to work has not been easy.
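For reference, this is roughly how I have been checking the binding state from the host shell (a sketch; vmhba33 is just the usual software iSCSI adapter name on my hosts, and vmk1 is an example VMkernel interface):

esxcli iscsi adapter list                            # confirm the software iSCSI adapter name (e.g. vmhba33)
esxcli iscsi networkportal list --adapter=vmhba33    # show which VMkernel interfaces are currently bound
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1    # the bind step; port binding requires the VMkernel's
                                                               # port group to have exactly one active uplink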

Thanks,

3 Replies
markzz
Enthusiast

On reading your question my first thought was "why do this at all, as your SAN is likely taking care of the migrations, clones, etc.?"

I don't recall offhand the technology (or acronym), but most modern SANs support offloading data migration tasks.  This is a mechanism where ESXi instructs the SAN to perform the clone or migration task and only registers the change in vCenter.

Therefore there is little or no load placed on the FC fabric or the host during this operation.  I believe this was introduced in ESXi 5.
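(For what it's worth, I believe the acronym is VAAI, specifically the clone / XCOPY offload primitive.)  Whether a given LUN actually supports it can be checked from the host, roughly like this (the device ID is just a placeholder):

esxcli storage core device vaai status get                      # ATS / Clone / Zero / Delete support for every device
esxcli storage core device vaai status get --device=naa.600...  # or limit the output to a single device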

I think you're overcomplicating things.

The KISS principle is an ideal level of complexity to target.

ArrowSIVAC
Enthusiast

Because I have testing workloads and controllers on iSCSI, and because I have some customers who bring in data / VMs that live on a system which hosts iSCSI.

Because VMware / IBM and other vendors have an aversion to hardware iSCSI initiators, I am left trying to build out a design where I have two 10Gb NICs in a single cluster-wide DV switch that needs the iSCSI software initiator, but I can't find any design that accomplishes that.

Goal:

iSCSI software initiator with two NICs in a server and no single point of failure (NSPOF), i.e. one dvSwitch with two adapters in it.

I can't believe I am the only one to have this need.

ArrowSIVAC
Enthusiast

Update to this posting:

After reviewing my posting with some OEM support, it turned out what I posted was not clear enough.  The issue is that when I used a single vSwitch (standard, NOT a DVSwitch) and added two NICs to it, I could NOT bind the software iSCSI initiator to that switch.

I was pointed to this guide from Brocade, which is very well done and points out the step that was missing:

http://www.brocade.com/content/dam/common/documents/content-types/solution-design-guide/ip-storage-b...

Step 1:

Create two VMkernel interfaces and associate them with the vSwitch (standard, NOT a DVSwitch!).

Set the MTU on the vSwitch to 9000.

Set the MTU on the VMkernel interfaces to 9000.

Then go back to the old 32-bit client, modify each VMkernel interface's NIC teaming, and set the binding so that the first VMkernel interface (VMkernel-iSCSI1 in the example below) has ONLY vmnic2 as its active adapter.  Do the reverse to bind VMkernel-iSCSI0 to vmnic3 in my example.

pastedImage_1.png

This then allows the iSCSI software initiator to be bound to those VMkernel interfaces on the vSwitch (an esxcli sketch of the same steps follows the screenshots):

pastedImage_2.png

pastedImage_3.png
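For anyone who prefers the command line, here is roughly the same sequence as esxcli commands (a sketch only; vSwitch1, vmk1/vmk2, the VMkernel-iSCSI0/1 port groups, vmnic2/vmnic3 and vmhba33 are the names from my example and will differ per environment):

# 1. MTU 9000 on the standard vSwitch and on both VMkernel interfaces
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000

# 2. Pin each iSCSI port group to a single active uplink (the other NIC is
#    left off the list, since port binding needs exactly one active uplink)
esxcli network vswitch standard portgroup policy failover set --portgroup-name=VMkernel-iSCSI1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=VMkernel-iSCSI0 --active-uplinks=vmnic3

# 3. Bind both VMkernel interfaces to the software iSCSI adapter and rescan
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli storage core adapter rescan --adapter=vmhba33

# 4. Optional: verify jumbo frames end to end (8972 = 9000 minus IP/ICMP headers)
vmkping -I vmk1 -d -s 8972 <iSCSI target IP>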

*** Versus trying to modify a DV switch: if you move the VMkernel interfaces to a DV switch, the 32-bit client gives you no option to change the backing physical NIC for the VMkernel interface.

pastedImage_4.png

So what I learned:

1) You must use the 32-bit client (as far as I can find) to modify the VMkernel interface's teaming so that only one uplink is active and all others are defined as "unused".

2) You cannot use DV switches with the iSCSI software initiator, at least from either the 32-bit client or the web interface, because you cannot set the VMkernel port's NIC teaming to a single active interface.

3) LACP aggregation, as noted, was an attempt to get the iSCSI software initiator to see only "one NIC in the switch" and avoid the error about dual NICs backing the VMkernel interface.  LACP not only does not avoid this error, it is also not best practice to use with iSCSI, as it interferes with MPIO.

Question:

1) Is there a place in the web console (vs. the 32-bit client) to modify the VMkernel interface's uplink binding (setting one uplink active and the other to 'unused')?

2) Is there a plan to fix that missing feature in the web UI?

3) Is there a CLI methodology that enables the use of dvSwitches with the software iSCSI initiator (since the option to set the VMkernel's active / unused backing device is NOT present in either the web UI or the 32-bit client)?  A PowerCLI sketch of what I'm hoping for is below.
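For question 3, this is the kind of thing I am hoping exists.  A PowerCLI sketch only, NOT something I have verified end to end: the port group names, uplink names and host name are placeholders, and I am assuming the VDS teaming cmdlets (Get-VDPortgroup / Set-VDUplinkTeamingPolicy) can pin a single uplink the same way the 32-bit client does on a standard switch:

# Pin each distributed port group used for iSCSI to a single active uplink
Get-VDPortgroup -Name "dvPG-iSCSI-A" | Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort "Uplink1" -UnusedUplinkPort "Uplink2"
Get-VDPortgroup -Name "dvPG-iSCSI-B" | Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort "Uplink2" -UnusedUplinkPort "Uplink1"

# Then bind the VMkernel interfaces to the software iSCSI adapter on each host
# (Get-EsxCli -V2 needs a reasonably recent PowerCLI; otherwise do it via SSH)
$esxcli = Get-EsxCli -VMHost "esx01.lab.local" -V2
$esxcli.iscsi.networkportal.add.Invoke(@{adapter="vmhba33"; nic="vmk1"})
$esxcli.iscsi.networkportal.add.Invoke(@{adapter="vmhba33"; nic="vmk2"})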

Thanks,
