VMware Cloud Community
Dongjianhua
Enthusiast

Regarding ISCSI multipath

Hi,

I know port binding is a way to implement an iSCSI multipath solution, but it has some limitations.

When you use port binding for multipathing, follow these guidelines:

iSCSI ports of the array target must reside in the same broadcast domain and IP subnet as the VMkernel adapters.

All VMkernel adapters used for iSCSI port binding must reside in the same broadcast domain and IP subnet.

All VMkernel adapters used for iSCSI connectivity must reside in the same virtual switch.

Port binding does not support network routing.

Do not use port binding when any of the following conditions exist:
Array target iSCSI ports are in a different broadcast domain and IP subnet.
VMkernel adapters used for iSCSI connectivity exist in different broadcast domains, IP subnets, or use different virtual switches.

Routing is required to reach the iSCSI array.

My question is: if one of the following conditions applies, how do you implement multipathing?

Array target iSCSI ports are in a different broadcast domain and IP subnet.

VMkernel adapters used for iSCSI connectivity exist in different broadcast domains, IP subnets, or use different virtual switches.

Routing is required to reach the iSCSI array.


9 Replies
Dongjianhua
Enthusiast

???

PCTechStream
Hot Shot

I got this info from one KB and wanted to share it with you and anyone else who comes across this question.  "Port binding requires that all target ports of the storage array must reside on the same broadcast domain as the vmkernel ports because routing is not supported with port binding.

Consider this sample iSCSI configuration, assuming the standard class-C netmask (255.255.255.0):

ESXi/ESX host:

vmk1 IP – 10.10.37.1

vmk2 IP – 10.10.38.1

vmk3 IP – 10.10.37.2

vmk4 IP – 10.10.38.2

Storage arrays:

SAN A IP – 10.10.37.30

SAN B IP – 10.10.38.30

When port binding is used, the software iSCSI stack asks all VMkernel ports to log in to all available targets on the storage arrays. This results in vmkernel ports vmk1 (10.10.37.1) and vmk3 (10.10.37.2) attempting to establish communication with the SAN B IP address (10.10.38.30), and vice versa."
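The cross-subnet login failures in that KB example can be sketched with a few lines of Python using the standard `ipaddress` module. The IPs and the /24 mask are taken from the post above; the `same_subnet` helper is just for illustration:

```python
# Sketch of the KB example above: with port binding, every bound VMkernel
# port tries to log in to every discovered target, even across subnets.
import ipaddress

NETMASK = "255.255.255.0"
vmks = {"vmk1": "10.10.37.1", "vmk2": "10.10.38.1",
        "vmk3": "10.10.37.2", "vmk4": "10.10.38.2"}
targets = {"SAN A": "10.10.37.30", "SAN B": "10.10.38.30"}

def same_subnet(ip_a, ip_b, mask=NETMASK):
    """True when both addresses fall inside the same network for this mask."""
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    return ipaddress.ip_address(ip_b) in net_a

# Port binding attempts all vmk -> target logins; the cross-subnet ones
# (e.g. vmk1 -> SAN B) cannot complete because routing is unsupported.
for vmk, vmk_ip in vmks.items():
    for san, san_ip in targets.items():
        status = "reachable" if same_subnet(vmk_ip, san_ip) else "FAILS (different subnet)"
        print(f"{vmk} ({vmk_ip}) -> {san} ({san_ip}): {status}")
```

Running it shows each vmkernel port can only complete logins against the target on its own subnet; the other half of the attempted sessions never establish, which is exactly why VMware says not to use port binding here.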

Raul.

VMware VDI Administrator

http://ITCloudStream.com/

www.ITSA.Cloud
Dongjianhua
Enthusiast

Thanks Raul. But what is the solution for such a situation? Is path failover possible?

PCTechStream
Hot Shot

Don't rely on my word! It's interesting that VMkernel port binding is specifically called out as unsupported for what you need to do, and I don't think that path failover is possible.

Based on your planned setup:

- Array target iSCSI ports are in a different broadcast domain and IP subnet. (likely not supported according to VMware)

- VMkernel adapters used for iSCSI connectivity exist in different broadcast domains, IP subnets, or use different virtual switches. (likely not supported according to VMware)

- Routing is required to reach the iSCSI array. (likely not supported according to VMware)

At this point, the best resources to ask about path failover / iSCSI multipathing are:

1. The storage vendor's best practices for provisioning and presenting LUNs to initiators

2. VMware Tech Support

Take a look at this article maybe this can help:

When To Use Multiple Subnet iSCSI Network Design - Wahl Network

Raul.

VMware VDI Administrator

http://ITCloudStream.com/

www.ITSA.Cloud
Eric_Allione
Enthusiast

The best guide I've seen on iSCSI multipathing is from Keith Barker's VCP6-DCV course on CBT Nuggets. Here are some of his points from it which are for vSphere 6.0:

To begin, you would have two VMkernel ports for storage traffic, and the virtual switch would have two vmnics under Active adapters. Then:

1. Edit one of the VMkernel ports: go to Teaming and failover, click Override, and move the vmnic you don't want into Unused adapters. The other vmnic stays in Active adapters.

2. On the other VMkernel port, do the opposite: move the vmnic that was in Unused adapters into Active adapters. The two ports end up with mirrored configurations, so if vmnic1 is active and vmnic2 is unused on one VMkernel port, then vmnic1 is unused and vmnic2 is active on the other.

The end result is that each VMkernel port only uses one of the uplinks rather than both. If you then click on either VMkernel port, it will show you exactly which path it is using.

The last step is to go to Manage > Storage, select the storage adapter, and open the Network Port Binding tab. This step is necessary to get iSCSI multipathing working: hit the green plus sign, check the boxes for the VMkernel ports you just set up, and finally rescan all storage adapters.
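For anyone who prefers the command line, those same steps can be sketched with esxcli on the ESXi host. The portgroup names (iSCSI-A, iSCSI-B), uplinks (vmnic1, vmnic2), VMkernel ports (vmk1, vmk2), and the adapter name vmhba64 are assumptions; check your own host with `esxcli iscsi adapter list` first:

```shell
# Pin each iSCSI VMkernel portgroup to a single active uplink
# (UI equivalent: Teaming and failover > Override).
# Portgroup and vmnic names here are assumptions for this sketch.
esxcli network vswitch standard portgroup policy failover set -p iSCSI-A --active-uplinks vmnic1
esxcli network vswitch standard portgroup policy failover set -p iSCSI-B --active-uplinks vmnic2

# Bind both VMkernel ports to the software iSCSI adapter
# (UI equivalent: Network Port Binding > green plus sign).
# vmhba64 is a typical software iSCSI adapter name, not guaranteed.
esxcli iscsi networkportal add -A vmhba64 -n vmk1
esxcli iscsi networkportal add -A vmhba64 -n vmk2

# Rescan so the new paths are discovered.
esxcli storage core adapter rescan -A vmhba64
```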

As Raul said, needing routing and having your network storage in its own VLAN is the industry standard, so those aren't complications for the basic iSCSI multipathing model.

You might also want to check the vSphere 6.5 Documentation for: ESXi and vCenter 6.5 Documentation > vSphere Storage > Understanding Multipathing and Failover (https://pubs.vmware.com/vsphere-65/index.jsp).

Dongjianhua
Enthusiast

Please see the attachment. In this solution (something like FC multipath), how do you deal with host adapter redundancy? How do you bind the iSCSI initiator to the VMkernel ports? I believe ESXi only supports one software iSCSI initiator.

Dongjianhua
Enthusiast

Any update on the above question?

TechMassey
Hot Shot

I'll just add to the great responses above. Regarding the last comment from the OP: the iSCSI software initiator is a software device that can be attached to multiple interfaces. You are correct that only one software iSCSI initiator can exist on each host.

This might help explain the order.

A physical interface is attached to a VMkernel port group, which in turn is bound to the iSCSI software initiator. You can do this more than once, meaning multiple interfaces attached to a single iSCSI software initiator. It would be best to contact VMware support regarding the details of those steps.
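You can verify that layering on the host itself with a couple of esxcli commands. These are read-only and safe to run; vmhba64 below is an assumed adapter name, and on a real host you'd take it from the first command's output:

```shell
# There is exactly one software iSCSI adapter per host; list adapters
# to find its name (often vmhba64, but not guaranteed).
esxcli iscsi adapter list

# With port binding in place, each bound VMkernel interface shows up as
# its own entry under that single adapter.
esxcli iscsi networkportal list -A vmhba64
```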


Please help out! If you find this post helpful and/or the correct answer, mark it! It helps recognize contributions to the VMTN community, and well, me too :slightly_smiling_face:
FreddyFredFred
Hot Shot

When you have two subnets for iSCSI, create one port group with one VMkernel port and one physical NIC for the first subnet, then create a second switch with another VMkernel port and physical NIC for the second subnet.

On the software iSCSI initiator, add the IP addresses of your iSCSI storage in the discovery tab. Do NOT add any of the vmk ports to the port binding section. In esxtop you should see traffic going out both VMkernel ports and both NICs.
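This multi-subnet approach can also be sketched with esxcli. The adapter name vmhba64 is an assumption, and the target IPs are reused from the KB example earlier in the thread; note there are no networkportal (port binding) commands here at all:

```shell
# Multi-subnet design: add each array IP under dynamic discovery and let
# the VMkernel routing table pick the right vmk per subnet. No port binding.
# vmhba64 and the target addresses are assumptions for this sketch.
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 10.10.37.30:3260
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 10.10.38.30:3260

# Rescan so the paths through both subnets are discovered.
esxcli storage core adapter rescan -A vmhba64
```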

In ESXi 6.5 there's a new option to allow routing for iSCSI, if that helps. See here: Best Practices for Configuring Networking with Software iSCSI