I am looking for some additional information about software iSCSI and port binding, based on the articles below.
Here is my environment: I have two SANs, an EqualLogic and a Nimble, both on the same subnet. I have a total of six physical adapters that I will bind to VMkernel ports.
My software iSCSI adapter takes a long time to rescan, and the question I am left with is: do my issues stem from having two VMkernel ports on one vSwitch and the other four VMkernel ports on a second vSwitch?
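One thing worth checking: with port binding, every bound VMkernel port that can reach a target on that subnet becomes a separate path, so six bound ports against two arrays multiplies the path count, and any bound port that cannot actually reach a target tends to stall the rescan on timeouts. A quick way to inspect this from the host shell (the adapter name `vmhba64` below is a placeholder; find yours with `esxcli iscsi adapter list`):

```shell
# List the VMkernel ports bound to the software iSCSI adapter; the
# "Compliant Status" field flags port groups whose teaming policy
# violates the one-active-uplink rule.
esxcli iscsi networkportal list --adapter=vmhba64

# Rescan just the software iSCSI adapter rather than all HBAs, so a
# slow scan can be attributed to this adapter's paths specifically.
esxcli storage core adapter rescan --adapter=vmhba64
```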
Seeing you are the certified guy here, and while hunting for a tweak before my pending support ticket gets "resolved", I want to survey the forum as well.
I know about the essential same-subnet port-binding rule and the requirement for an exclusive vmnic mapping for each VMkernel port group, but what I don't get is why we can't use an active/standby port group setup with port binding. For example here, because of some other constraints in the network (actually just a lab one), I would be happy to have port binding using two VMkernel port groups on the same vSwitch, where vmk0 provides management (which needs uplink redundancy) plus iSCSI over vmnic0 as active, while vmk1 carries iSCSI only via vmnic1. Just for that management-redundancy reason, I need the vmnic1 uplink in standby state, not unused state, for vmk0. vCenter just doesn't support this.
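For what it's worth, the one-active-uplink rule is enforced per port group, not per vSwitch, so the management port group can keep its active/standby pair while the iSCSI port groups on the same vSwitch override the teaming policy. A sketch from the host shell, assuming a standard vSwitch and hypothetical port group names (`iSCSI-1`, `iSCSI-2`, `Management Network`) and uplinks `vmnic0`/`vmnic1`:

```shell
# Each iSCSI port group overrides the vSwitch teaming policy so it has
# exactly one active uplink; uplinks not listed become unused for that
# port group, which is what port binding requires.
esxcli network vswitch standard portgroup policy failover set \
    -p iSCSI-1 --active-uplinks=vmnic0
esxcli network vswitch standard portgroup policy failover set \
    -p iSCSI-2 --active-uplinks=vmnic1

# The management port group keeps its own redundancy: vmnic0 active,
# vmnic1 standby. This does not affect the iSCSI port groups above.
esxcli network vswitch standard portgroup policy failover set \
    -p "Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic1
```

The catch in your scenario is that you want vmk0 itself to carry both management and iSCSI, and a bound VMkernel port's own port group may not have a standby uplink; splitting management and iSCSI onto separate port groups (even on one vSwitch) is the supported way around that.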
To the best of your knowledge, is there some tweak that would let me bypass this check and just let MPIO get built with a mix of strict and non-strict port groups? I don't care about all vmk traffic temporarily travelling a single path if vmnic0 fails; I accept that risk, whatever the reasons the designers don't support this.
Thanks for a prompt response.
Answering myself: there is truly nothing like that, just the steps to be followed in the well-known "Best Practices for iSCSI" document at storagehub.vmware.com, together with the official "vSphere Storage" documentation branch.
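For anyone landing here later, the documented flow boils down to a few esxcli steps once the compliant port groups exist. A minimal sketch, assuming the software iSCSI adapter comes up as `vmhba64`, the bound interfaces are `vmk1`/`vmk2`, and the discovery address `192.168.1.10` stands in for your array's portal (all names and addresses here are placeholders):

```shell
# Enable the software iSCSI adapter if it is not already enabled.
esxcli iscsi software set --enabled=true

# Bind each dedicated VMkernel port (one active uplink, others unused)
# to the software iSCSI adapter.
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Point dynamic (send targets) discovery at the array's portal address.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 \
    --address=192.168.1.10

# Rescan the adapter so the new paths and devices show up.
esxcli storage core adapter rescan --adapter=vmhba64
```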