VMware Cloud Community
aaronwsmith
Enthusiast

ESXi 6.0U3: Intel 82599 10 Gigabit Dual Port (X520) - Rx Maximums and MTU Sizes

Question: For the ixgbe driver, why would the Rx Max change from 4096 to 240 when the MTU is changed from 1500 to 9000?  Wouldn't that make the Rx buffer ineffective at handling Jumbo Frames?

We have two clusters with the same hardware specs, built with the same ESXi image, the same ixgbe async driver, and the same firmware version on the Intel 82599 10 Gbps NICs.  But in one cluster, ethtool indicates the Rx maximum cannot be set above 240, while in the other it indicates 4096.  I found the reason is the MTU size: on the host where Rx Max == 4096, MTU == 1500, whereas on the host where Rx Max == 240, MTU == 9000.

** Example host where I get option to set Rx maximum to 4096:

[~] ethtool -g vmnic0

Ring parameters for vmnic0:

Pre-set maximums:

RX:             4096

RX Mini:        0

RX Jumbo:       0

TX:             4096

Current hardware settings:

RX:             240

RX Mini:        0

RX Jumbo:       0

TX:             1024

[~] ethtool -i vmnic0

driver: ixgbe

version: 4.5.1-iov

firmware-version: 0x800007f4, 17.5.10

bus-info: 0000:01:00.0

[~] esxcli network nic list

Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address         MTU  Description

------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  ------------------------------------------------------

vmnic0  0000:01:00.0  ixgbe   Up            Up           10000  Full    ec:f4:bb:xx:xx:xx  1500  Intel(R) 82599 10 Gigabit Dual Port Network Connection

vmnic1  0000:01:00.1  ixgbe   Up            Up           10000  Full    ec:f4:bb:xx:xx:xx  1500  Intel(R) 82599 10 Gigabit Dual Port Network Connection

[~] vmware -l

VMware ESXi 6.0.0 Update 3

** Example host where I don't seem to have the ability to set Rx maximum above 240:

[~] ethtool -g vmnic0

Ring parameters for vmnic0:

Pre-set maximums:

RX:             240

RX Mini:        0

RX Jumbo:       0

TX:             4096

Current hardware settings:

RX:             240

RX Mini:        0

RX Jumbo:       0

TX:             1024

[~] ethtool -i vmnic0

driver: ixgbe

version: 4.5.1-iov

firmware-version: 0x800007f4, 17.5.10

bus-info: 0000:01:00.0

[~] esxcli network nic list

Name    PCI Device    Driver  Admin Status  Link Status  Speed  Duplex  MAC Address         MTU  Description

------  ------------  ------  ------------  -----------  -----  ------  -----------------  ----  ------------------------------------------------------

vmnic0  0000:01:00.0  ixgbe   Up            Up           10000  Full    ec:f4:bb:xx:xx:xx  9000  Intel(R) 82599 10 Gigabit Dual Port Network Connection

vmnic1  0000:01:00.1  ixgbe   Up            Up           10000  Full    ec:f4:bb:xx:xx:xx  9000  Intel(R) 82599 10 Gigabit Dual Port Network Connection

[~] vmware -l

VMware ESXi 6.0.0 Update 3

On the host where Rx Max == 240, if I change the MTU from 9000 --> 1500, the Rx Max changes from 240 --> 4096.  Just an FYI, as it took me a while to figure out this behavior.
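For reference, a rough sketch of how to flip between the two states and watch the maximum change (this assumes vmnic0 is an uplink of a standard vSwitch named vSwitch0; adjust names for your environment):

[~] esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=1500   # drop the vSwitch and its uplinks to MTU 1500

[~] ethtool -g vmnic0                                                        # Pre-set maximum RX now reports 4096

[~] esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000   # raise the MTU back to 9000

[~] ethtool -g vmnic0                                                        # Pre-set maximum RX drops back to 240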

But I'm unsure then how an Rx ring capped at 240 descriptors can be effective for Jumbo Frames?

3 Replies
kkoekkoek
Contributor

I have this exact issue with an X520 on ESXi 6.5 U1, and it seems to hurt my NFS datastores. It looks like Potentially poor NFS Read I/O performance with 10GbE vmnics (2120163) | VMware KB, but the 240 cap doesn't allow me to implement that KB's solution.
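For context, the workaround that KB describes is (roughly) raising the vmnic's Rx ring size, along the lines of the following, which the 240 pre-set maximum blocks (vmnic0 is just an example):

[~] ethtool -G vmnic0 rx 4096   # request a larger Rx ring; with the 240 cap this cannot be honored

[~] ethtool -g vmnic0           # confirm the current hardware RX setting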

mhampto
VMware Employee

Edit: After talking with an engineer, the recommendation was to open a Support Request if you are able to, so we can loop in Intel if needed.

It is odd that the maximum changes, though the current hardware setting is 240 with either 1500 or 9000 set. Have you tested each configuration to see if there is a performance difference?
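If it helps with that comparison, one quick sanity check that jumbo frames actually pass end-to-end is a don't-fragment vmkping at a near-9000 payload (the vmkernel interface and target IP below are just examples):

[~] vmkping -d -s 8972 -I vmk1 10.0.0.2   # 8972 = 9000 minus 20-byte IP and 8-byte ICMP headers; -d sets don't-fragment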

admin
Immortal

Hello, this is an ixgbe driver issue.

There is no fix on the 6.0 version; please try 6.5 U1 with the native ixgben driver:

# esxcli system module set -m ixgben -e true

# esxcli system module parameters set -p "RxDesc=4096,4096,4096,4096" -m ixgben   (one value per NIC, so four values if you have four)

then reboot
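After the reboot, a quick way to confirm the change took effect (assuming the ixgben module loaded and claimed the ports):

[~] esxcli network nic list                          # Driver column should now show ixgben

[~] esxcli system module parameters list -m ixgben   # RxDesc should show 4096,4096,4096,4096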

https://access.redhat.com/solutions/2451601
