VMware Cloud Community
virtualinstall
Enthusiast

iSCSI Port Binding Question

Hi,

Could do with a little help getting my head round the following:

I had a problem with a newly configured ESXi 5.1 host: at reboot it hung at "vmw_satp_lsi successfully loaded".  I linked this to http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=201708... and the fact that I had iSCSI port binding in place, but I'm not so sure.  Host and SAN are configured as follows:

ESXi host:

vSwitch1

vmk1 IP: 192.168.132.105              (Port Binding vmnic1)

vmk2 IP: 192.168.133.105              (Port Binding vmnic2)

Storage array:

Controller 0/1: 192.168.130.100

Controller 0/2: 192.168.131.100

Controller 0/3: 192.168.132.100

Controller 0/4: 192.168.133.100

Controller 1/1: 192.168.130.101

Controller 1/2: 192.168.131.101

Controller 1/3: 192.168.132.101

Controller 1/4: 192.168.133.101

What I'm finding a little unclear is the Dell MD3220i documentation on this point, quoted below:

"In a configuration assign one VMkernel port for each physical NIC in the system. So if there are 3 NICs, assign 3 VMkernel Ports. This is referred to in VMware’s iSCSI SAN Configuration Guide as 1:1 port binding.

- Note: Port binding requires that all target ports of the storage array must reside on the same broadcast domain as the VMkernel ports because routing is not supported with port binding. See VMware KB #2017084 here. "

So the above advises 1:1 port binding for multiple NICs, then notes that they must all be on the same broadcast domain?
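For reference, the binding on my host was set up roughly like this from the CLI (the iSCSI-1/iSCSI-2 port group names and the vmhba33 adapter name are placeholders for my actual ones; vmk1/vmk2 and vmnic1/vmnic2 are as listed above):

# pin each iSCSI port group to a single active uplink (the 1:1 mapping)
esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic1
esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic2

# bind both vmkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# confirm the bindings
esxcli iscsi networkportal list --adapter vmhba33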

Any help on understanding this appreciated.

Thanks

Gkeerthy
Expert

Your current config is OK, no need to worry too much. For each subnet just use one pNIC; that is what the Dell paper says, and that is what I also mentioned in my first post: a one-to-one mapping is needed for best practice. But there is no issue if you have only 2 pNICs and use only 2 port groups. As it stands you will have 4 paths either way, and each port group can see 2 paths.

So far it is ok, no issues.

The whole point of the VLANs/subnets in an IP-based SAN is to tag the packets, and the storage network should not be routable, because routing will introduce delays and may cause SCSI timeouts. So the basic point is: all the storage IPs should be pingable from the ESXi host.
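A quick sanity check would look roughly like this (the target IPs are from your earlier post; vmhba33 is a placeholder for your software iSCSI adapter name):

# each bound vmkernel port should reach a target on its own subnet, no routing involved
vmkping -I vmk1 192.168.132.100
vmkping -I vmk2 192.168.133.100

# then confirm the bindings and count the paths that adapter actually discovers
esxcli iscsi networkportal list --adapter vmhba33
esxcli storage core path list | grep -c "Runtime Name: vmhba33"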

Please don't forget to award point for 'Correct' or 'Helpful', if you found the comment useful. (vExpert, VCP-Cloud. VCAP5-DCD, VCP4, VCP5, MCSE, MCITP)
kermic
Expert

OK, looks like you have multiple scenarios available. The multi-subnet one contradicts what VMware supports and recommends for port binding; however, it is probably there for a reason, and that reason is most likely the array architecture.

Can you get someone from the Dell side (i.e. an engineer from the partner company that sold you the MD3200 box) who could explain the consequences of both scenarios - multiple subnets vs one?

If you're on your own and the array is not in production yet, then test them both: do a performance benchmark, and try unplugging cables / powering off switches to test failover. That should give you enough information to base your decision on.
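One rough way to watch a failover from the CLI while you pull cables (the naa ID is a placeholder for one of your MD LUNs):

# note the path states for one LUN before the test
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx

# pull a cable or power off one switch, then re-check; failed paths should show "State: dead"
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx | grep State

# for the performance side, esxtop's disk device view ('u') shows DAVG/KAVG latency per device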

WBR

Imants

virtualinstall
Enthusiast

Thanks to you both for helping me get my head round this, much appreciated. :)

virtualinstall
Enthusiast

Just to add a conclusion to this:

I configured a host trying both iSCSI options separately, different subnets and then the same subnet, and both gave long boot times and long HBA rescan times.  Our existing two hosts had been running fine for some time, but they hadn't been rebooted in a while and I couldn't recall long boot times on them.  I tried a rescan of the HBAs on the working hosts and they too now had this issue!  Eventually I found this article, http://kb.vmware.com/kb/1016106, referring to RDMs and MSCS, which was set up in our environment earlier this month.  I marked the LUNs used in the MSCS cluster as perennially reserved on all hosts as per the article and everything is now good.
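For anyone else who hits this, the per-host commands from that KB look roughly like this (substitute the naa ID of each of your own MSCS RDM LUNs):

# mark the MSCS RDM LUN as perennially reserved so the host skips it during boot/rescan
esxcli storage core device setconfig -d naa.xxxxxxxxxxxxxxxx --perennially-reserved=true

# verify the flag took effect (should report "Is Perennially Reserved: true")
esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i perennially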

Thanks again for your help.

scerazy
Enthusiast

virtualinstall wrote:

Thanks, just re-reading them all again as I'm starting to get a better understanding; one point in the Dell Deployment Guide is

"Configuring hardware initiators from within VMware® ESXi5.0 Server™ is not supported on the MD platform"

Another point I've just read from http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=102564... is

For a message posted in Feb 2013, that was total rubbish!

The issue with BCM drivers was fixed in June 2012 - Broadcom iSCSI Adapter - BCM 5709 not working with iSCSI

Of course MD units work fine with a hardware initiator.
