So I have two hosts in the same cluster, set up the exact same way. One host is using both iSCSI VMkernel ports while the other isn't. Can someone tell me what determines this?
Using software iSCSI to an EMC VNX 5300.
By default, the software iSCSI configuration creates only one path to each iSCSI target. Port binding is used to provide multiple paths to an iSCSI array. With vSphere 5, iSCSI port binding can be configured in the vSphere Client as well as the CLI; prior to ESXi 5, it could only be configured from the CLI. There are a few things to bear in mind before configuring port binding: each bound VMkernel port must have exactly one active uplink (with any other uplinks set to unused), and the VMkernel ports must reach the array targets on the same subnet, since routed connections are not supported with port binding on these releases.
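Before walking through the setup, it may help to compare what your two hosts actually have bound from the CLI; something along these lines (vmhba33 is just an example adapter name, the first command shows yours):

esxcli iscsi adapter list
esxcli iscsi networkportal list --adapter=vmhba33

If one host lists both vmks here and the other lists only one, that's the difference you're seeing.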
The first step in configuring software iSCSI is to set up the appropriate network configuration: create a new vSwitch and assign to it the NICs you want to dedicate to iSCSI traffic.
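If you'd rather script this part, creating the vSwitch and attaching the NICs from the CLI looks roughly like this (vSwitch1, vmnic1, and vmnic2 are example names, not required values):

esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1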
The next step is to go into the settings of the new vSwitch and create a separate portgroup for each NIC you have assigned to the vSwitch. In my lab I used two NICs, so I created two portgroups. My resulting configuration looked like this:
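In CLI terms, the resulting configuration is roughly the following, including the final port-binding step (portgroup names, vmk numbers, and the IP are examples; the software iSCSI adapter name varies per host):

# one portgroup per NIC, each overridden to exactly one active uplink
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-1 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.1.2.11 --netmask=255.255.255.0 --type=static
# repeat for iSCSI-2/vmk2 on vmnic2, then bind both vmks to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2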
Ah, I have already seen this. I thought you might be showing something different. I have 4 x 1Gb connections coming from an EMC CX3-40 iSCSI array into ESXi 5.0, 5.1, and 5.5 hosts (experimenting here), and I cannot get more than 2 paths to show active. All the documentation online shows one path or two paths, but never four. I was possibly looking for something in ESXCLI to address sessions per volume? I understand that the default is 2. And then I'm not sure what to do about round robin.
10.1.2.18, 10.1.3.18, 10.1.4.18, 10.1.5.18, all in the same VLAN.
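On the round robin part: the path selection policy can be set per device from ESXCLI; something like this should do it (the device ID below is a placeholder, list yours first):

esxcli storage nmp device list
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR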
We are seeing an identical issue with 1 host in our Cisco UCS deployment.
Two ESXi 5.1 U3 hosts
Two iSCSI vmk NICs per host
Host 2 is fine.
Host 1 is the "bad" host, but everything is configured the same as Host 2 as far as I can tell.
The only difference I'm seeing is in the number of paths and connected targets.
On Host 2 we're seeing 20 targets and paths (this seems odd since we only have 4 volumes, but it's still the one that's pathing correctly, so I'm not going to ask questions).
On Host 1 we're seeing 16 targets and paths.
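For anyone comparing hosts the same way, the session and path counts can be pulled per host with something like this (vmhba33 is an example adapter name):

esxcli iscsi session list --adapter=vmhba33
esxcli storage core path list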
I can vmkping both NICs on host 2 from host 1.
I can vmkping one NIC on host 1 from host 2; the other does not respond.
I can vmkping both NICs on host 1 from host 1.
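For reference, those tests were done by forcing the ping out a specific VMkernel interface, which vmkping supports directly (the vmk number and target IP here are examples):

vmkping -I vmk1 10.0.0.2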
Things I've tried and have not worked:
I have tried removing the iSCSI binding on the bad vmkNIC and re-adding it (CLI sketch after this list).
I have tried a different IP address on the bad vmkNIC.
I have even created a new vmkNIC on the same vmnic as the old vmkNIC, trying both the same IP and a different one.
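For the record, the remove/re-add from the first item can be done from the CLI roughly like this (adapter and vmk names are examples):

esxcli iscsi networkportal remove --adapter=vmhba33 --nic=vmk2   # --force may be needed if sessions are active
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2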
No matter what I do, I get the same results seen in the photos at the start and below.
Again, only a single host in our UCS deployment has this issue, and from what I can tell it's using the same UCS template as the rest.
VLAN is correct, etc...
Good host 2, Bad host 1.
As you can see, active/unused is set up correctly; this is of course flipped on the other iSCSI vmk.
UCS traffic on the vmnics used for the iSCSI A and B legs. Obviously, one isn't being used.
No round robin; our array has its own pathing driver, which looks correct. I think we've actually whittled it down to either (a) a problem with the iSCSI initiator in VMware or (b) our UCS config, since the only hosts seeing any issues are UCS hosts. All non-UCS hosts seem to have no issues.
Unfortunately, we can't reboot these for a while, since the hosts having the issue run the company's Cisco UC and will require some scheduled downtime. We have Cisco going over our UCS config first to make sure there's nothing awry there, and if not, then we're going to reboot the hosts.
Did you ever find a resolution to this? We are experiencing the exact same issue between our UCS and a VNX 5400 in both of our data centers. We are on ESXi 5.5 U2. We have the issue whether we use a VM-FEX VDS, a VMware VDS, or a VMware vSwitch. It's a mixture of issues, as not every host behaves the same: most of the hosts do not show all of the port groups being used, some show all of them used, but the number of connected targets doesn't correlate with that either way.