fiatlux
Contributor

Unable to send/receive at full line rate with Intel dual-port 10 GbE 82599 Ethernet in guest OS on ESXi 5.5

I am investigating whether it's feasible to send/receive at 20 Gbps aggregate throughput with an Intel 82599 dual-port card inside a guest VM on ESXi. I'm using ntop's PF_RING DNA to send and receive traffic. I have configured the dual-port card in DirectPath I/O (passthrough) mode so the guest VM has direct access to the card. As the traffic generator, I'm using a bare-metal machine with another dual-port card. The generator and the ESXi host have their dual-port cards directly connected via SFP+ cables.
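In case it matters, the receive path is essentially the stock PF_RING DNA loop. Below is a minimal sketch, not my exact code: the device name dna:eth2 and the snaplen are placeholders, and error handling is trimmed. The transmit side is symmetric, using pfring_send() instead of pfring_recv().

```c
/* Minimal PF_RING (DNA) capture loop -- a sketch, not the exact test code.
 * "dna:eth2" is a placeholder; substitute your DNA interface.
 * Link against libpfring (e.g. gcc -O2 rx.c -lpfring -lpcap).
 */
#include <stdio.h>
#include <pfring.h>

int main(void) {
  pfring *ring = pfring_open("dna:eth2", 1500 /* snaplen */, PF_RING_PROMISC);
  if (ring == NULL) { perror("pfring_open"); return 1; }

  pfring_enable_ring(ring);

  while (1) {
    u_char *pkt;
    struct pfring_pkthdr hdr;
    /* last arg = 1: block until a packet arrives */
    if (pfring_recv(ring, &pkt, 0, &hdr, 1) > 0) {
      /* count/process the packet here */
    }
  }

  pfring_close(ring);
  return 0;
}
```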

In the first experiment, the traffic generator sends 10 Gbps of traffic over one port. The guest VM captures at full line rate without any packet loss, even for 64-byte packets, which translates to roughly 14.88 million packets per second. I then had the guest VM transmit at 10 Gbps to the traffic generator, which was now the receiver. However, the guest VM was only able to generate around 7 Gbps of traffic regardless of packet size. CPU load looked fine at only around 20%. I don't understand why the VM can receive at full line rate but cannot transmit at line rate.
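For reference, the 14.88 Mpps figure is just the 10 GbE line rate for minimum-size frames, since every 64-byte frame carries 20 extra bytes on the wire (preamble, SFD, and inter-frame gap):

\[
\frac{10 \times 10^{9}\ \text{bit/s}}{(64 + 20)\ \text{bytes} \times 8\ \text{bit/byte}} \approx 14.88 \times 10^{6}\ \text{packets/s}
\]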

In the second experiment, the traffic generator sends 20 Gbps of traffic over both ports. Unfortunately, the guest VM can only capture at 5-6 Gbps per port, for an aggregate of 10-12 Gbps, even with the receiver process for each port pinned to a different CPU core (see the sketch below). Then I had the VM generate 10 Gbps on each port; again it could only manage 5-6 Gbps per port regardless of packet size.
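For completeness, each receiver process pins itself to its own core before entering the capture loop. A minimal sketch using Linux sched_setaffinity(); the core number is a placeholder:

```c
/* Pin the calling process to one CPU core (Linux).
 * A sketch using sched_setaffinity(); the core number is an example.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int pin_to_core(int core) {
  cpu_set_t set;
  CPU_ZERO(&set);
  CPU_SET(core, &set);
  /* pid 0 = the calling process */
  return sched_setaffinity(0, sizeof(set), &set);
}

int main(void) {
  if (pin_to_core(1) != 0) { /* e.g. port 0's receiver on core 1 */
    perror("sched_setaffinity");
    return 1;
  }
  /* ... run the PF_RING capture loop for this port ... */
  return 0;
}
```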

The bare-metal machine is only a low-end 2.2 GHz quad-core AMD box, yet it can send at full line rate: 10 Gbps over one port and 20 Gbps over two. The ESXi server is actually a much beefier machine with a 3.2 GHz quad-core Intel Xeon, and it has plenty of spare memory and CPU.

The ESXi server is running 5.5, and only one guest VM is running during the experiments.

I am quite puzzled why there is a bottleneck even in DirectPath I/O mode. Both CPU and memory appear fine. Is there any setting I should be tuning in ESXi? I appreciate any help. Thanks in advance.

1 Reply
fiatlux
Contributor

It turned out that I had placed the network card in PCIe slot 3, which is only routed as x4. I moved it to slot 1 (x8), and now I get the full 20 Gbps.
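That is consistent with the numbers above. Assuming the slots run PCIe 2.0 (the 82599 is a PCIe 2.0 x8 device), each lane carries 5 GT/s with 8b/10b encoding:

\[
\text{x4}: \; 4 \times 5\ \text{GT/s} \times \tfrac{8}{10} = 16\ \text{Gbit/s per direction}, \qquad
\text{x8}: \; 8 \times 5\ \text{GT/s} \times \tfrac{8}{10} = 32\ \text{Gbit/s per direction}
\]

After TLP/DLLP protocol overhead, a Gen2 x4 link delivers very roughly 12-14 Gbit/s of usable payload bandwidth, which lines up with the 10-12 Gbps aggregate ceiling observed earlier; x8 leaves comfortable headroom for 20 Gbps.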
