We have DL360s, each with a single two-port iSCSI network adapter (Ethernet over fiber transceivers).
The two switches are HP-branded Mellanox switches.
Storage is HPE MSA 2060.
Cables are run so that one port of the iSCSI adapter goes to one switch and the other port goes to the other switch, so if a switch fails we can still operate. Each switch is connected to 4 ports on the MSA storage. Each side of the MSA is split into different subnets/VLANs, as are the switches. No VLAN tagging from vSphere.
For iSCSI with multipathing, would it be better to have two VMkernel adapters on a single vSwitch, or two VMkernel adapters on two different vSwitches? Does it matter? I had a hard time finding a way to configure it so that one NIC was primary and the other was for failover. It just never worked.
I can get it to work with two VMkernel adapters on two vSwitches, but I see a lot more paths (and duplicate storage IPs) than I was expecting. I assumed I'd just see one active connection for each port on the MSA.
Looking for anyone who's done something similar and knows how it's done properly. I've read dozens of articles, watched a ton of videos and read whitepapers. There's a lot of conflicting info on setting this up.
More details are required such as:
Exact type and model of the server, to validate that it is supported by VMware, as well as the type and model of the NIC.
VMware vSphere version and build.
Log in to the ESXi host over SSH and run the following commands to identify the NIC name (vmnicxx) and its firmware and current driver, then post your results:
# esxcli network nic list
# esxcli network nic get -n vmnicxx
To list the configured iSCSI targets you can run:
# esxcli iscsi adapter target list
To list the iSCSI targets with IP and port, run this:
# esxcli iscsi adapter target portal list
To list the static iSCSI targets:
# esxcli iscsi adapter discovery statictarget list
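If you do end up with port binding configured (intentionally or not), it also helps to list which VMkernel ports are bound to the software iSCSI adapter. A sketch, where `vmhba64` is only a placeholder adapter name:

```shell
# Find the software iSCSI adapter name (e.g. vmhba64)
esxcli iscsi adapter list

# List VMkernel ports currently bound to that adapter
# (empty output means no port binding is in effect)
esxcli iscsi networkportal list -A vmhba64
```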
Since the iSCSI setup uses two different subnets, I'd go with two vSwitches, each with a single VMkernel port group.
Also note that you must not configure explicit port binding for these VMkernel port groups in the Software iSCSI adapter.
You want a single VMkernel port per vmnic, each mapped to exactly one uplink. You don't want NIC-teaming failover of any kind on the vmnic side; you are going to let VMware multipathing find the best route, either vmnic1 or vmnic2, via their vmk ports.
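That layout can be sketched from the CLI as follows; `vmnic1`/`vmnic2`, the port group names, and the 10.0.10.x / 10.0.20.x addresses are placeholders for your own values:

```shell
# vSwitch1: first iSCSI uplink with its own VMkernel port
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI-A
esxcli network ip interface add -i vmk1 -p iSCSI-A
esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.0.10.11 -N 255.255.255.0

# vSwitch2: second iSCSI uplink on the other subnet
esxcli network vswitch standard add -v vSwitch2
esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic2
esxcli network vswitch standard portgroup add -v vSwitch2 -p iSCSI-B
esxcli network ip interface add -i vmk2 -p iSCSI-B
esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.0.20.11 -N 255.255.255.0
```

With one uplink per vSwitch there is nothing to fail over at the teaming layer, which is exactly what you want here.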
In my case it's a single physical adapter with two ports, so still two vmnics. Does that make a difference to your suggestion? I'll review the docs you linked to.
Sounds like I can either do one vSwitch with the two VMkernel adapters in different port groups and set the other NIC as unused in each, or have two different vSwitches, each with one VMkernel adapter.
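For the single-vSwitch variant, the "one NIC unused" part is set per port group through the failover policy. A sketch, assuming placeholder port group names `iSCSI-A`/`iSCSI-B`; listing only one active uplink (and no standby) takes the other NIC out of that port group's failover order:

```shell
# Pin each iSCSI port group to exactly one uplink
esxcli network vswitch standard portgroup policy failover set -p iSCSI-A -a vmnic1
esxcli network vswitch standard portgroup policy failover set -p iSCSI-B -a vmnic2
```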
Another confusing bit of info:
Do not use port binding when any of the following conditions exist:
and then when you look under best practices, they say to use port binding.
I would keep it as two separate vSwitches. Only do a single vSwitch if you are using the same subnet for both VMkernel ports, and in that case you would use network port binding.
Don't feel bad, it took me forever to figure this out myself, and I have two sites. As luck would have it, the storage array at one site has both controllers on the same subnet, so there I use a single vSwitch and network port binding. The other site has controllers on separate subnets, so I use two vSwitches and no network port binding.
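For the same-subnet site, the port binding itself is done against the software iSCSI adapter; `vmhba64` below is just a placeholder for your adapter name:

```shell
# Bind both iSCSI VMkernel ports to the software iSCSI adapter
# (find the adapter name with: esxcli iscsi adapter list)
esxcli iscsi networkportal add -A vmhba64 -n vmk1
esxcli iscsi networkportal add -A vmhba64 -n vmk2
```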
That's my plan right now: two vSwitches, each with a single VMkernel adapter. Since it's not production yet, I can play around with it to see what works and what doesn't. For some reason, the way I have it configured now, two of the storage ports show Active (I/O) but the other two show only Active. Not sure why; I would think it would be Active (I/O) on both sides.
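One common reason some paths show Active (I/O) and others only Active is ALUA plus the path selection policy: a given volume is owned by one MSA controller, so paths to the owning controller carry I/O while paths to the partner controller sit Active as failover candidates. A quick way to inspect this, where the `naa.` device ID is a placeholder for your own volume:

```shell
# Show PSP/SATP and path summary for all devices
esxcli storage nmp device list

# Show per-path state (Active (I/O) vs Active) for one device
esxcli storage nmp path list -d naa.600c0ff000000000000000000000

# Switch the device to Round Robin if it isn't already using it
esxcli storage nmp device set -d naa.600c0ff000000000000000000000 -P VMW_PSP_RR
```

Even with Round Robin, I/O normally rotates only across the optimized paths (the owning controller), so Active-only paths on the other controller can be expected behavior rather than a misconfiguration.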