VMware Cloud Community
Karl12347
Contributor

vmxnet3 driver performance

I am looking into increasing performance on our environment.

We have a DR cluster consisting of five ESXi 5.1 hosts running 46 guests.

I am considering updating the hardware version from 7 to 9 on the guests for our database servers, and I know this includes support for the vmxnet3 driver with 10Gb network connections.

My question is: the physical switch only has 1Gb ports. If the guests are on the same host, will they communicate within the hypervisor at 10Gb speeds, or will they still be limited to 1Gb?

Any help would be appreciated.

Thanks in advance

Karl

2 Replies
MKguy
Virtuoso

I am considering updating the hardware version from 7 to 9 on the guests for our database servers, and I know this includes support for the vmxnet3 driver with 10Gb network connections.

vmxnet3 is already supported since VM hardware version 7.
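
If you want to double-check what the guest itself sees, a Linux guest with a vmxnet3 vNIC should already report a 10Gb link (eth0 is just an example interface name here):

# show the negotiated link speed of the vNIC:
ethtool eth0 | grep Speed
# this should print something like: Speed: 10000Mb/s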

My question is: the physical switch only has 1Gb ports. If the guests are on the same host, will they communicate within the hypervisor at 10Gb speeds, or will they still be limited to 1Gb?

Guests on the same host and vSwitch/port group can exceed 1Gbps by a wide margin no matter what vNIC you use. I know one would think that e.g. the e1000, which presents a 1Gbps link to the guest, is limited to 1Gbps maximum, or that vmxnet3 is limited to a maximum of 10Gbps. But that is not the case: they can easily exceed their "virtual link speed". Test it with a network throughput tool like iperf and see for yourself.
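
To illustrate, a quick test between two Linux guests on the same host and port group could look like this (the IP address is just a placeholder for the first VM's address):

# on the first VM, start iperf in server mode:
iperf -s

# on the second VM, run the client against the first VM's IP,
# e.g. for 30 seconds with 4 parallel streams:
iperf -c 192.168.10.11 -t 30 -P 4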

That's because the physically imposed signaling limitations do not apply in a virtualized environment between two VMs on the same host/port group, and guest OSes don't artificially throttle traffic to match the negotiated link speed unless the physical link requires it.

To give an example, I'm able to achieve 25+Gbps between 2 Linux VMs with vmxnet3 on the same host/network with iperf.

The main advantage of the vmxnet3 vNIC is that it provides offloading and paravirtualization features, which reduce the CPU load imposed by high network throughput. I have been running it as our standard vNIC on Windows and most Linux VMs since 4.x.
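
If you are curious which offloads the guest driver has enabled, on a Linux guest you can list them like this (again, eth0 is just an example interface name):

# list offload features such as TCP segmentation offload (TSO)
# and generic receive offload (GRO):
ethtool -k eth0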

-- http://alpacapowered.wordpress.com
goppi
Enthusiast

I can only agree with what MKguy said.

We have also used vmxnet3 as the default NIC type for a few years because it lowers CPU usage on the VMs due to its offloading capabilities.

LAN communication speed between two VMs on the same host is not limited by the physical network hardware.

Cheers.