ESXi 6.7 Storage vMotion Mellanox performance degradation


Update 2.0.

I have this issue:

The Mellanox ConnectX-4/ConnectX-5 native ESXi driver might exhibit performance degradation when its Default Queue Receive Side Scaling (DRSS) feature is turned on

Receive Side Scaling (RSS) technology distributes incoming network traffic across several hardware-based receive queues, allowing inbound traffic to be processed by multiple CPUs. In Default Queue Receive Side Scaling (DRSS) mode, the entire device is in RSS mode. The driver presents a single logical queue to the OS, backed by several hardware queues.

The native nmlx5_core driver for the Mellanox ConnectX-4 and ConnectX-5 adapter cards enables DRSS by default. While DRSS improves performance for many workloads, it can degrade performance with certain multi-VM, multi-vCPU workloads.

Workaround: If significant performance degradation is observed, you can disable the DRSS functionality.

  1. Run the esxcli system module parameters set -m nmlx5_core -p "DRSS=0 RSS=0" command (quoting the parameter string so both values are passed as one argument).
  2. Reboot the host.
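
After the reboot, the change can be verified by listing the module parameters. A minimal check, assuming the standard nmlx5_core module name from the steps above:

```shell
# Show the currently configured nmlx5_core parameters and confirm
# that DRSS and RSS now read 0 (disabled).
esxcli system module parameters list -m nmlx5_core | grep -E 'DRSS|RSS'

# To revert to the driver defaults later, clear the parameter string
# and reboot the host again:
# esxcli system module parameters set -m nmlx5_core -p ""
```

Note that module parameter changes only take effect after the host reboot, so the list output reflects the configured value, not necessarily the active one, until then.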

Disabling RSS did not help much. All my 6.5 hosts work fine.

When using Storage vMotion between my NetApp systems via NFS, throughput drops from 3 Gbit/s to around 10 Mb/s.

Is there any way to track this issue, if it has been known since release?
