VMware Cloud Community
guow03
Contributor

Mellanox InfiniBand ConnectX-5 driver support for ESXi 7.0 U1

Hi, 

Do we have a driver for the following adapter?

0000:2f:00.0 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5] [vmnic2]

I checked around but didn't find any.

Thanks


Wei 

scott28tt
VMware Employee

@guow03 

Did you check this?

https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=io&productid=42584&releasei...

 


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
guow03
Contributor

This is the Ethernet adapter driver. I've already installed it, but I still cannot see vmnic2 (the InfiniBand interface) listed among the physical NICs.

0000:2f:00.0 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5] [vmnic2]

  • VMware ESXi 7.0 nmlx5_core 4.19.70.1 NIC Driver CD for Mellanox ConnectX-4/5/6 Ethernet Adapters
  • File size: 867.94 KB
  • File type: zip
nmlx5-core    Mellanox Technologies ConnectX-4 Native Core Driver    4.19.70.1-1OEM.700.1.0.15525992    MEL    Friday, December 04, 2020, 19:45:29 -0500
nmlx5-rdma    Mellanox Technologies ConnectX-4 Native RDMA Driver    4.19.70.1-1OEM.700.1.0.15525992    MEL    Friday, December 04, 2020, 19:45:29 -0500
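
For anyone checking the same thing, the standard esxcli commands to see what is actually installed and which ports ESXi has claimed as uplinks are roughly these (a minimal sketch):

esxcli software vib list | grep nmlx    # which Mellanox native driver VIBs are installed
esxcli network nic list                 # ports ESXi has claimed as vmnics
esxcfg-nics -l                          # same list, with driver name and link state per vmnic

If vmnic2 does not show up in the NIC list even though the VIBs are present, the driver has not claimed the InfiniBand port.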

 

 

 

Perttu
Enthusiast

AFAIK ESXi doesn't support an InfiniBand PHY as a host device. IB is supported only as an SR-IOV or full pass-through device to a VM. Even if ESXi did support IB as a host device, you couldn't put an IB port as an uplink on a VDS, even though you can run the Internet Protocol (IP) on top of the IB fabric. This is because IP and Ethernet/IB belong to different layers, and vSphere virtual switches are just Ethernet switches. The IB and Ethernet families provide physical and link layer connectivity, each with its own specification, and these are not interoperable, whereas IP belongs to the Internet layer and doesn't care what the underlying medium is as long as it passes IP packets.

Luckily all modern Mellanox IB adapters are actually VPI (Virtual Protocol Interconnect) adapters, so you can dynamically change ports from IB to Ethernet. So, just change the port type to Ethernet and it should work all right. Just follow the instructions here:

https://docs.mellanox.com/pages/releaseview.action?pageId=15051769
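
For example, assuming the card shows up in mst status as mt4119_pciconf0 (as in the output later in this thread), switching the first port to Ethernet is roughly this sketch; a host reboot is needed for the firmware change to take effect:

./mlxconfig -d mt4119_pciconf0 set LINK_TYPE_P1=2      # 1 = InfiniBand, 2 = Ethernet
./mlxconfig -d mt4119_pciconf0 query | grep LINK_TYPE  # confirm before rebooting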

If you want InfiniBand connectivity, then SR-IOV is the best option. However, there are two specific firmware options that have to be set on the Mellanox card in order for that to function. We are doing this to mount a Lustre filesystem to desktop Linuxes (VDI) to offer scientists a blazingly fast filesystem over an IB fabric.

 

guow03
Contributor

@Perttu What would those firmware options be? I tried to configure that card to enable SR-IOV so that my VM can use InfiniBand for our GPFS filesystem, but it always says enabled, reboot needed. After the reboot, it reverts and needs to be configured again.

Also, I checked that there was an MLNX-OFED ESX Driver for VMware ESX/ESXi 6.0, but that is for 6, not for 7, and I am on ESXi 7. Plus it was released in 2016. https://www.mellanox.com/products/adapter-software/vmware/exsi-server

I am not seeing vmnic2 in the physical NICs list. The card can be seen in the lspci output, in mst status as mt4119_pciconf0, and in mlxconfig, but never as an interface. So I think the driver is needed.
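
In case it helps narrow this down, one way to check whether the firmware setting actually persisted across the reboot, and whether the driver itself has been told to expose VFs, is something like the following (a sketch; the max_vfs parameter name should be verified against your nmlx5_core build):

./mlxconfig -d mt4119_pciconf0 query | grep -E "SRIOV_EN|NUM_OF_VFS"   # did the firmware change stick?
esxcli system module parameters list -m nmlx5_core | grep max_vfs      # driver-side VF count, if the parameter exists
esxcli system module parameters set -m nmlx5_core -p "max_vfs=8"       # then reboot the host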

Thanks a lot. 

Perttu
Enthusiast

Sorry, but I don't quite follow you. So, have you been following these instructions?

 https://docs.mellanox.com/pages/releaseview.action?pageId=15053024

I have no experience with ESXi 7.0 yet, but in your case I would first install the ESXi 7 driver referenced here:

https://www.mellanox.com/products/ethernet-drivers/vmware/exsi-server

And the accompanying firmware.
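
Installing the offline bundle itself is the usual esxcli route (a sketch; the path and bundle filename below are just placeholders for whatever you download):

esxcli software vib install -d /tmp/<mellanox-nmlx5-offline-bundle>.zip
reboot

On ESXi 7 the driver may also be packaged as a component, in which case esxcli software component apply -d takes the same kind of depot zip.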

 

guow03
Contributor

Thanks very much for your help.

OK. I think I got things fixed and my VMs get EDR speed with SR-IOV.

My server has two Mellanox cards, one ConnectX-5 IB EDR card and one ConnectX-4 Ethernet card.

ESXi 7.0 U1 comes with the nmlx4_core/rdma/en drivers. I installed the Mellanox nmlx5_core/rdma drivers, so there could have been some conflict between the drivers. I uninstalled the nmlx4 drivers.
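
If someone needs to do the same, removing the inbox nmlx4 VIBs is along these lines (a sketch; confirm the exact VIB names on your host first):

esxcli software vib list | grep nmlx4
esxcli software vib remove -n nmlx4-en -n nmlx4-rdma -n nmlx4-core
reboot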

Besides the following from the instructions,

./mlxconfig -d mt4119_pciconf0 set SRIOV_EN=1 NUM_OF_VFS=8

./mlxconfig -d mt4119_pciconf0 set ADVANCED_PCI_SETTINGS=1

./mlxconfig -d mt4119_pciconf0 set FORCE_ETH_PCI_SUBCLASS=1

I also had to set the following to get SR-IOV to work after reboot and the speed right.

./mlxconfig -d mt4119_pciconf0 set LINK_TYPE_P1=1 KEEP_ETH_LINK_UP_P1=0 KEEP_IB_LINK_UP_P1=1

./mlxlink -d mt4119_pciconf0 -p 1 --speeds EDR # without this, I only got SDR speed.
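
To confirm the result after the reboot, the SR-IOV state can also be checked from the host side (a sketch; vmnic2 is just the name on this host, and the sriovnic namespace only lists uplinks with SR-IOV enabled):

esxcli network sriovnic list                 # uplinks with SR-IOV enabled
esxcli network sriovnic vf list -n vmnic2    # VFs exposed on the IB port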

Hope this will help other users. 

 

dchau3600
Contributor

Thank you! I wasn't able to get VFs enabled without the extra settings you mentioned.
