VMware Cloud Community
groundsea
Enthusiast

Why are the RX buffer length and VMDQ number of the Intel ixgbe NIC decreased in the vCD (deployed with VxLAN) and VIO (deployed with VDS) scenarios?

Hi experts,

We use Intel ixgbe NICs in our hosts, and we have set the module options below to enable 8 VMDQ queues per NIC.

esxcfg-module -s "InterruptType=2,2,2,2 VMDQ=8,8,8,8" ixgbe
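
(For reference, a minimal sketch of how we double-check that the options were actually applied after a reboot or module reload; it assumes the standard ixgbe module name.)

# show the option string currently configured for the ixgbe module
esxcfg-module -g ixgbe
# esxcli view of the module parameters and their configured values
esxcli system module parameters list -m ixgbe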

If the host is not prepared by vCD (which uses VxLAN) or VIO (which uses a VDS without VxLAN), the physical NIC's VMDQ number is 8 and the pre-set maximum RX ring size is 4096.

But if we prepare the host with vCD (which uses VxLAN) or VIO (which uses a VDS without VxLAN), the physical NIC's VMDQ number drops to 4 and the pre-set maximum RX ring size drops to 512.

These changes obviously affect the network performance of our application, so I would like to know: is this how the system is implemented, or is there something that can be configured?

Any reply would be appreciated!

The normal RX ring parameters (from ethtool -g vmnic6) look like this:

Ring parameters for vmnic6:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             456
RX Mini:        0
RX Jumbo:       0
TX:             1024

[root@VDF109:~] ethtool -S vmnic6 |grep queue |grep packet
     tx_queue_0_packets: 0
     tx_queue_1_packets: 0
     tx_queue_2_packets: 0
     tx_queue_3_packets: 0
     tx_queue_4_packets: 0
     tx_queue_5_packets: 0
     tx_queue_6_packets: 0
     tx_queue_7_packets: 0
     tx_queue_8_packets: 0
     rx_queue_0_packets: 1084
     rx_queue_1_packets: 0
     rx_queue_2_packets: 0
     rx_queue_3_packets: 0
     rx_queue_4_packets: 0
     rx_queue_5_packets: 0
     rx_queue_6_packets: 0
     rx_queue_7_packets: 0
     rx_queue_8_packets: 0

After the host is prepared with VxLAN or VIO, the values change:

[root@VDF109:~] ethtool -g vmnic8
Ring parameters for vmnic8:
Pre-set maximums:
RX:             512
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             512
RX Mini:        0
RX Jumbo:       0
TX:             4096

[root@VDF109:~] ethtool -S vmnic8 |grep queue |grep packet
     tx_queue_0_packets: 2878481859
     tx_queue_1_packets: 0
     tx_queue_2_packets: 1039548483
     tx_queue_3_packets: 0
     tx_queue_4_packets: 0
     rx_queue_0_packets: 2595905
     rx_queue_1_packets: 1351314766
     rx_queue_2_packets: 1365270610
     rx_queue_3_packets: 1735407715
     rx_queue_4_packets: 0

BR.

Haifeng.

Accepted Solutions
groundsea
Enthusiast

I found the reason: the MTU of the VDS is set to 1600. In the vCD (deployed with VxLAN) scenario the MTU must be larger than 1500, so the change cannot be avoided there. But in the VIO (deployed with VDS) scenario the MTU can be set back to 1500, and then the RX ring length and VMDQ number return to their original values.
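
For anyone hitting the same issue, a minimal sketch of how to check the VDS MTU from the host and, in the VIO/VDS-only case, lower it back to 1500 (the switch name "myVDS" is just a placeholder):

# on the ESXi host: list distributed switches, including their configured MTU
esxcli network vswitch dvs vmware list
# from PowerCLI (placeholder VDS name "myVDS"); only sensible for the
# VIO/VDS case, since a VXLAN-prepared vCD host still needs an MTU above 1500
Get-VDSwitch -Name "myVDS" | Set-VDSwitch -Mtu 1500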
