VMware Cloud Community
cc1
Enthusiast

Multipathing issue using multiple subnets and Nimble arrays

We have a requirement to use multiple subnets for multipathing because we also use XenServer. This is the first time we have set up multipathing like this, so we may be missing a step.

Our hosts use two physical NICs for iSCSI traffic, connected to two stacked switches. We typically bind a VMkernel port to each vmnic for multipathing. This time I have set up each pNIC on its own vSwitch, bound to a VMkernel port with iSCSI enabled. The array has five NICs; I set two ports on one subnet and three on the other. The Nimble has a virtual "discovery" IP, which is what we use for dynamic discovery. The switches are set up with their respective VLANs and routing disabled. I can vmkping all IPs with no problem.
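For reference, the port binding described above can be inspected (and created, if missing) from the ESXi shell. This is only a sketch; the adapter name vmhba33 and the vmk numbers are assumptions, so substitute your own values from the first command:

```shell
# List iSCSI adapters to find the software adapter's vmhba name
# (assumed to be vmhba33 below)
esxcli iscsi adapter list

# Show which VMkernel ports are currently bound to the adapter,
# including their port-binding compliance status
esxcli iscsi networkportal list -A vmhba33

# Bind one VMkernel port per vSwitch/pNIC to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```

A vmk port that is not bound (or whose vSwitch has more than one active uplink) can show up as unused for iSCSI even though vmkping works.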

When I rescan the adapter or refresh the storage, I only see 3 paths. In the network configuration of the iSCSI software adapter, the path status for the vmk port in question shows as "Not Used".

[Attachment: ScreenShot607.jpg]

5 Replies
Sreec
VMware Employee

Hi,

Please check http://kb.vmware.com/kb/1003681 and http://kb.vmware.com/kb/2038869 and let me know your findings.

Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 7x|Cisco Certified Specialist
Dave_Mishchenko
Immortal

Is the path selection for the LUNs set to Nimble_PSP_Directed?
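For anyone following along, the path selection policy in use can be checked (and changed) per device from the ESXi shell. A sketch; the naa device ID below is a placeholder, not a value from this thread:

```shell
# Show the current path selection policy (PSP) for each device.
# With the Nimble plugin installed you would expect Nimble_PSP_Directed;
# otherwise VMW_PSP_RR (Round Robin) is the usual choice for iSCSI multipathing.
esxcli storage nmp device list

# Switch a single device to Round Robin (device ID is a placeholder)
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR
```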

cc1
Enthusiast

Apparently, we do not have that plugin in our version. I guess that is in the 2.0 release.

cc1
Enthusiast

I had performed all of those suggestions prior to posting, except vmkping with the "-d" switch. I get the following when using jumbo frames (-s 9000).

What is interesting is that I get this error against all arrays, including the existing EqualLogics:
~ # vmkping 192.168.23.9 -s 9000 -d
PING 192.168.23.9 (192.168.23.9): 9000 data bytes
sendto() failed (Message too long)
sendto() failed (Message too long)
If I choose a smaller packet size, it works:
~ # vmkping 192.168.23.9 -s 8784 -d
PING 192.168.23.9 (192.168.23.9): 8784 data bytes
8792 bytes from 192.168.23.9: icmp_seq=0 ttl=64 time=0.863 ms
8792 bytes from 192.168.23.9: icmp_seq=1 ttl=64 time=0.887 ms
8792 bytes from 192.168.23.9: icmp_seq=2 ttl=64 time=0.861 ms

Without the "-d" switch it works:

~ # vmkping 192.168.23.9 -s 9000
PING 192.168.23.9 (192.168.23.9): 9000 data bytes
9008 bytes from 192.168.23.9: icmp_seq=0 ttl=64 time=0.896 ms
9008 bytes from 192.168.23.9: icmp_seq=1 ttl=64 time=0.855 ms
9008 bytes from 192.168.23.9: icmp_seq=2 ttl=64 time=0.854 ms

The switch ports are set to an MTU of 9216 and the vmknics and vSwitch are set to 9000. It is almost as if jumbo frames are not configured properly. I have already rebooted the host, and I plan on rebooting the switches tonight.
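The sendto() failure at -s 9000 with -d set is actually consistent with a correctly configured 9000-byte MTU: with the don't-fragment bit, the ICMP payload plus the 20-byte IP header and 8-byte ICMP header must fit inside the MTU, so the largest payload that can pass is 8972 bytes. A quick check of the arithmetic:

```shell
# Max vmkping -s payload for a 9000-byte MTU with -d (don't fragment):
# MTU minus 20-byte IP header minus 8-byte ICMP header
mtu=9000
ip_hdr=20
icmp_hdr=8
max_payload=$((mtu - ip_hdr - icmp_hdr))
echo "$max_payload"   # prints 8972
```

So `-s 9000 -d` failing is expected even when jumbo frames are working end to end; `vmkping -d -s 8972` is the largest size that should succeed. Without -d, the 9000-byte payload simply gets fragmented, which is why that test passes.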

cc1
Enthusiast

We just installed the 220G, which has two active 10 GbE ports, and we can now see both paths active/active. I am not sure why the 1 GbE model would not balance across all links. Since we only plan to use the 1 GbE model for DR, we are not going to troubleshoot it further.
