VMware Cloud Community
electricd7
Contributor

Can I support multiple vmkernel ports on same physical NIC as NVMe-of?

Hello-

I am working on an implementation plan that uses Dell R640 servers with dual-port 100Gb Mellanox adapters installed. These adapters were selected specifically because they support NVMe-oF. I am using a Pure Storage X70 array, which also has 100Gb NVMe-oF-capable interfaces, all cabled between two Cisco Nexus 3K 100Gb switches. What I was hoping to do was create a single vSwitch on each host that uses both Mellanox 100Gb interfaces as uplinks. I would use standard active/standby teaming for Management, active/active for the VM Network, and active/unused for the A-side and B-side vMotion port groups. I would also create two VMkernel ports on separate VLANs for RDMA traffic, with each VMkernel port using only a single NIC as its uplink and the other uplink set to "unused" (a rough sketch of the failover overrides is below).
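For reference, this is roughly how I am setting the per-portgroup failover overrides on the standard vSwitch. This is only a sketch; the portgroup names and vmnic numbers are placeholders from my plan, not necessarily what you would see on your hosts:

esxcli network vswitch standard portgroup policy failover set -p "NVMe-A" --active-uplinks=vmnic4
esxcli network vswitch standard portgroup policy failover set -p "NVMe-B" --active-uplinks=vmnic5
esxcli network vswitch standard portgroup policy failover set -p "Management Network" --active-uplinks=vmnic4 --standby-uplinks=vmnic5

Leaving an uplink out of both the active and standby lists is what marks it "unused" for that portgroup, which is how each NVMe VMkernel port ends up pinned to a single physical NIC.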

I have this all set up, and I have added my two software RDMA adapters and tied them to the appropriate physical uplink adapters. When I go to mount my storage, I am able to map to the B side of the Pure array on the appropriate RDMA adapter, but I can never get the A side to connect on the other RDMA adapter. I opened a case with VMware support two weeks ago, but so far they haven't been able to tell me why this doesn't work, or whether it simply isn't supported to share the uplinks for RDMA traffic with other services the way we do with iSCSI traffic. (The relevant commands for checking the mapping are below.)
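For anyone wanting to check the same thing from the command line, these are roughly the relevant commands. The adapter name, IP, and NQN below are placeholders rather than my actual values, and the exact option names may differ slightly between ESXi builds:

esxcli rdma device list        (shows each vmrdma device and the vmnic it is paired with)
esxcli nvme adapter list       (shows the software NVMe over RDMA adapters and their vmrdma devices)
esxcli nvme fabrics discover -a vmhba65 -i <A-side controller IP>
esxcli nvme fabrics connect -a vmhba65 -i <A-side controller IP> -s <subsystem NQN>

The idea is to run the discover/connect once per software adapter, with each adapter pointed at its own side of the array.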

I will say that if I separate the NVMe VMkernel ports onto separate vSwitches, each with only one physical uplink, I can connect just fine to both sides of the Pure Storage array. That would be fine, except it would require cabling separate 10Gb Ethernet to the Dell servers to handle all traffic other than NVMe-oF.

Does anyone have any experience with NVMe-oF yet and can confirm whether it is supported to share the uplinks with other types of traffic?  Thanks!

2 Replies
Lalegre
Virtuoso

Hello electricd7,

Looking at your configuration, I think everything you are trying to do is completely supported, and it is a good approach. I assume you have been following this documentation: https://support.purestorage.com/Solutions/VMware_Platform_Guide/User_Guides_for_VMware_Solutions/Imp...

In the requirements for the networking stack inside vSphere, it says that you need two port groups, but at the end of the procedure it mentions a new vSwitch, so there is an inconsistency there.

Make sure that the VMkernel binding is correctly configured for both targets, using one VMkernel on side A and another VMkernel on side B: Configure RDMA Network Adapters. (A few commands to verify this from the host are below.)
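If you want to double-check the binding from the host itself, something along these lines should show it. Command options may vary a little between ESXi builds, and the vmk number and IP here are just examples:

esxcli rdma device list        (confirms which vmnic each vmrdma device sits on)
esxcli nvme adapter list       (confirms which vmrdma device each software NVMe adapter uses)
esxcli nvme controller list    (shows which controllers each adapter actually connected to)
vmkping -I vmk3 <A-side controller IP>   (confirms basic IP reachability from the A-side VMkernel on its VLAN)

If the A-side VMkernel cannot even vmkping its controller IP, the problem is in the VLAN/uplink path rather than in the NVMe configuration.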

Also, I do not know if this scenario applies here, but in the past I had issues with other storage arrays that were set to active/standby on their side, which prevented the ESXi host from having two active paths. Maybe there is a setting on the Pure Storage side where you can review that.

JJBN
Enthusiast

Hi,

We are having the exact same problem. How was it solved?

Thanks!
