iSCSI port binding always remains grayed out in UI.
It's simply that when you perform network port binding on your software iSCSI adapter using that VMkernel adapter, that's when you will see a tick mark on it.
Hi, there is no issue in your lab environment; this is the normal behavior. Once the software iSCSI adapter is mapped to your VMkernel port, it is selected automatically, but you cannot select or deselect it from the VMkernel port properties.
A VMkernel adapter that you want to designate for IP storage doesn't support Active-Active or Active-Standby NIC teaming.
So all you have to do is make sure you expose only one vmnic to the VMkernel port you created for IP storage on one of the vSwitches.
For example, say I pick a host with virtual switch vSwitch0 that has only one uplink, vmnic0. If I click Properties on this virtual switch and add an additional VMkernel port, I will then be able to use it for network port binding.
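If you prefer the command line, the same binding can be done with esxcli; a minimal sketch, assuming the software iSCSI adapter is vmhba33 and the new VMkernel port is vmk1 (both names are examples, verify yours with the list commands first):

```shell
# List iSCSI adapters to find the software adapter name (e.g. vmhba33)
esxcli iscsi adapter list

# List VMkernel interfaces to find the one created for IP storage (e.g. vmk1)
esxcli network ip interface list

# Bind the VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# Verify the binding: vmk1 should now appear under the adapter
esxcli iscsi networkportal list --adapter=vmhba33
```

If the `add` command complains about the teaming policy, the port group still has more than one active uplink.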
Note: You can still do it if you have a virtual switch with multiple uplink adapters. All you have to do in that case is, after adding the VMkernel adapter for IP storage, override the NIC teaming policy of that port group: set only one vmnic out of the multiple you have as active, and move the rest into the Unused adapters area.
That way your network port binding won't be troubled.
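On a standard vSwitch this teaming override can also be scripted; a sketch, assuming a port group named "iSCSI-1" (a hypothetical name) whose single active uplink should be vmnic1:

```shell
# Override the port group's teaming policy so only vmnic1 is active.
# Uplinks listed neither as active nor standby are treated as unused
# for this port group.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="iSCSI-1" \
    --active-uplinks=vmnic1

# Confirm the effective failover policy for the port group
esxcli network vswitch standard portgroup policy failover get \
    --portgroup-name="iSCSI-1"
```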
Let me know if this helps.
The host is a nested ESXi 6.0 server, but the only way this seems possible is by adding a standard vSwitch. It looks like a dvSwitch is not possible as a storage adapter NIC for what I am trying to do (though nested ESXi 6 comes with VMware Tools).
Thanks!
It shouldn't be an issue even if you are in a nested environment.
It should work with either a vSS or a vDS, provided that you follow the same rule: have only one active uplink available to this VMkernel port.
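When a VMkernel adapter isn't offered for binding, the usual cause is that its port group has more than one active uplink. A few esxcli commands can help confirm the uplink layout on either switch type:

```shell
# Standard vSwitches: shows uplinks and the port groups on each switch
esxcli network vswitch standard list

# Distributed vSwitches: shows this host's uplinks per dvSwitch
esxcli network vswitch dvs vmware list

# VMkernel interfaces and the port groups they are attached to
esxcli network ip interface list
```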
Hmmm still doesn't work. Just one NIC mapped to the port....what gives?
Hi
Any news regarding this issue ?
I'm currently having the same issue.
To sum up :
I have 4 ESXi 6.0 hosts, each with 4 physical NICs.
2 of the physical NICs are connected to a distributed switch (DS1): DS1 is composed of nic1 and nic3 of host 1, host 2, host 3 and host 4.
The 2 other NICs are in another distributed switch (DS2): DS2 is composed of nic0 and nic2 of host 1, host 2, host 3 and host 4.
Each distributed switch has several port groups (PG).
PG11, PG12, PG13 are connected to DS1.
PG21, PG22, PG23 are connected to DS2.
The port groups of distributed switch 1 use a LAG (LAG1).
The port groups of distributed switch 2 use a LAG (LAG2).
I created the LAGs in Active/Active mode, and LACP has been configured on the physical switch.
I created a VMkernel adapter on the port group I want to use to connect to an iSCSI device.
When I go into the properties of the iSCSI Software Adapter and try to bind a VMkernel network adapter, no VMkernel adapters are available.
What should I do? Is the LAG the problem?
Thanks in advance
Software iSCSI Port Binding is also contraindicated when LACP or other link aggregation is used on the ESXi host uplinks to the pSwitch.
ref: VMware KB: Considerations for using software iSCSI port binding in ESX/ESXi
in short, LAGs (LACP) don't play well with iSCSI port binding.
- LACP support is not compatible with software iSCSI multipathing. iSCSI multipathing should not be used with normal static etherchannels or with LACP bonds.
ref: VMware KB: Limitations of LACP in VMware vSphere 5.5
Thanks for the reply.
Why is "Software iSCSI Port Binding contraindicated when LACP or other link aggregation is used on the ESXi host uplinks to the pSwitch"?
Is there a way to add the same physical NIC to a dvSwitch and a standard vSwitch?
I mean, having a dvSwitch with several port groups using NIC1 and NIC3 in LACP,
and having the same NICs (NIC1 and NIC3) in a standard vSwitch without LACP? (Or another dvSwitch with the same NIC1 and NIC3 but without LACP?)
