VMware Communities
wwluo
Contributor

Traffic always split into 4 rx queues (out of 5) regardless of RSS parameter for ixgbe NIC on ESXi host

I am feeding 10 Gbps of traffic to an ixgbe NIC and configuring the ixgbe driver to do RSS. No matter what I set RSS to (I tried 2, 8, and 16), 5 RX queues are always created and only 4 of them receive traffic.

$ vmware -l

VMware ESXi 6.0.0 Update 1

ixgbe version:

ixgbe driver 4.5.1-iov

How ixgbe is loaded:

vmkload_mod ixgbe RSS="8,8" MQ="1,1" VMDQ="0,0"
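For anyone reproducing this: the same parameters can also be set persistently through esxcli instead of loading the module by hand. A rough sketch of the equivalent commands (the parameter string just mirrors the vmkload_mod line above):

esxcli system module parameters set -m ixgbe -p "RSS=8,8 MQ=1,1 VMDQ=0,0"   # persists across reboots
esxcli system module parameters list -m ixgbe | grep -E "RSS|MQ|VMDQ"       # confirm the values the driver sees
vmkload_mod -s ixgbe                                                        # list the parameters the module supports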

ethtool -S vmnic6:

     rx_queue_0_packets: 174100541

     rx_queue_0_bytes: 120559913472

     rx_queue_1_packets: 174051497

     rx_queue_1_bytes: 120768860764

     rx_queue_2_packets: 174518839

     rx_queue_2_bytes: 120811065782

     rx_queue_3_packets: 173746403

     rx_queue_3_bytes: 120625801992

     rx_queue_4_packets: 0

     rx_queue_4_bytes: 0
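A quick way to watch whether rx_queue_4 ever picks up packets is to loop over the same counters (a simple sketch, using the vmnic name above):

# print the per-queue RX packet counters once a second
while true; do
    ethtool -S vmnic6 | grep "rx_queue_.*_packets"
    sleep 1
done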

/var/log/vmkernel.log:

2017-08-15T14:04:31.633Z cpu25:77048)<6>Intel(R) 10GbE PCI Express Linux Network Driver - version 4.5.1-iov

2017-08-15T14:04:31.633Z cpu25:77048)<6>Copyright(c) 1999 - 2017 Intel Corporation.

2017-08-15T14:04:31.633Z cpu25:77048)PCI: driver ixgbe is looking for devices

2017-08-15T14:04:31.643Z cpu25:77048)<6>ixgbe: Multiple Queue Support Enabled

2017-08-15T14:04:31.643Z cpu25:77048)<6>ixgbe: Receive-Side Scaling (RSS) set to 8

2017-08-15T14:04:31.643Z cpu25:77048)<6>ixgbe: Enabled/Disable CNA set to 1

2017-08-15T14:04:31.643Z cpu25:77048)<6>ixgbe: 0000:01:00.0: ixgbe_check_options: CNA turned off when RSS  is enabled through mod_param

2017-08-15T14:04:31.643Z cpu25:77048)<6>ixgbe: 0000:01:00.0: ixgbe_check_options: CNA disabled, 0 queues

2017-08-15T14:04:31.643Z cpu25:77048)<6>ixgbe: Virtual Machine Device Queues (VMDQ) set to 0

2017-08-15T14:04:31.751Z cpu25:77048)<6>ixgbe 0000:01:00.0: FCoE offload feature is not available. Disabling FCoE offload feature

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe4 vector 0x85

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe5 vector 0x86

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe6 vector 0x87

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe7 vector 0x88

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe8 vector 0x89

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe9 vector 0x8a

2017-08-15T14:04:31.775Z cpu25:77048)VMK_PCI: 723: device 0000:01:00.0 allocated 6 MSIX interrupts

2017-08-15T14:04:31.775Z cpu25:77048)MSIX enabled for dev 0000:01:00.0

2017-08-15T14:04:31.775Z cpu25:77048)<6>ixgbe: 0000:01:00.0: ixgbe_alloc_q_vector: using rx_count = 512

2017-08-15T14:04:31.775Z cpu25:77048)<6>ixgbe: 0000:01:00.0: ixgbe_alloc_q_vector: using rx_count = 512

2017-08-15T14:04:31.776Z cpu25:77048)<6>ixgbe: 0000:01:00.0: ixgbe_alloc_q_vector: using rx_count = 512

2017-08-15T14:04:31.776Z cpu25:77048)<6>ixgbe: 0000:01:00.0: ixgbe_alloc_q_vector: using rx_count = 512

2017-08-15T14:04:31.776Z cpu25:77048)<6>ixgbe: 0000:01:00.0: ixgbe_alloc_q_vector: using rx_count = 512

2017-08-15T14:04:31.777Z cpu25:77048)<6>ixgbe 0000:01:00.0: PCI Express bandwidth of 32GT/s available

2017-08-15T14:04:31.777Z cpu25:77048)<6>ixgbe 0000:01:00.0: (Speed:5.0GT/s, Width: x8, Encoding Loss:20%)

2017-08-15T14:04:31.777Z cpu25:77048)<6>ixgbe 0000:01:00.0: 0000:01:00.0: MAC: 2, PHY: 18, SFP+: 5, PBA No: 400900-000

2017-08-15T14:04:31.777Z cpu25:77048)<6>ixgbe 0000:01:00.0: 00:25:90:e5:63:5e

2017-08-15T14:04:31.777Z cpu25:77048)<6>ixgbe 0000:01:00.0: 0000:01:00.0: Enabled Features: RxQ: 5 TxQ: 2

SureshKumarMuth
Commander

What is your CPU setup? How many cores?

Regards,
Suresh
https://vconnectit.wordpress.com/
wwluo
Contributor

The host has 16 physical and 32 logical cores.

I am not sure whether there is any setting that limits the number of cores the hypervisor can use for polling packets.
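For reference, these are the checks I know of for the queue/NetQueue state on the host (a sketch; the vsish path is from memory and may differ between builds):

esxcli system settings kernel list -o netNetqueueEnabled    # is NetQueue enabled at the VMkernel level?
vsish -e ls /net/pNics/vmnic6/rxqueues/queues/              # RX queues the vmkernel sees on this uplink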

bluefirestorm
Champion

Only six MSI-X interrupt vectors were allocated.

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe4 vector 0x85

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe5 vector 0x86

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe6 vector 0x87

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe7 vector 0x88

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe8 vector 0x89

2017-08-15T14:04:31.775Z cpu25:77048)IntrCookie: 3545: cookie 0xe9 vector 0x8a

2017-08-15T14:04:31.775Z cpu25:77048)VMK_PCI: 723: device 0000:01:00.0 allocated 6 MSIX interrupts

2017-08-15T14:04:31.775Z cpu25:77048)MSIX enabled for dev 0000:01:00.0

And VMDQ is disabled

2017-08-15T14:04:31.643Z cpu25:77048)<6>ixgbe: Virtual Machine Device Queues (VMDQ) set to 0

I was going to suggest enabling VMDQ to try to get close to 10 Gbps throughput. I could only find a very old VMware KB (last updated June 2013) on how to enable it with the ixgbe driver, but I assume the process is still the same (see the sketch after the link below). Coincidentally, that same KB says the ixgbe driver's maximum number of RX queues is limited to 4 (under the Notes section after step 4).

Enabling Support for NetQueue on the Intel 82598 and 82599 10 Gigabit Ethernet Controller (1004278) ...
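From memory, the KB boils down to setting the VMDQ module parameter and reloading the driver. A rough sketch of the esxcli equivalent (the queue counts are only an example, one value per port):

esxcli system module parameters set -m ixgbe -p "VMDQ=8,8"   # enable VMDQ on both ixgbe ports
# reload the module or reboot, then check what the driver actually enabled:
grep "Enabled Features" /var/log/vmkernel.log | tail -n 1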

You should also ask Intel whether there is a way to increase the number of MSI-X interrupt vectors, or whether there is a formula or rule of thumb for the number of MSI-X vectors (assuming it can be raised from six to some higher number) based on the number of TX/RX queues and VMDQs. Typically, each device driver queue gets its own interrupt vector.
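At minimum, you can confirm the vector count straight from the log you already posted:

grep "MSIX interrupts" /var/log/vmkernel.log    # shows "allocated N MSIX interrupts" per device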
