Assuming there are no bottlenecks elsewhere, what is the maximum ingress network traffic that a single vmxnet3 adapter on a VM can receive from the network? Is it 10 GBps (gigabytes per second) or 10 Gbps (gigabits per second)?
Thanks!
In theory and in the physical world, the maximum data rate would be 10 Gigabit/s, since vmxnet3 emulates a 10GBASE-T physical link.
This bitrate is governed by physical signaling limitations on the wire of said standard, however these don't apply in a purely virtual setup (2 VMs on the same host and same vSwitch and port group).
Guests on the same host and vSwitch/port group can exceed 10 Gbps by a wide margin. One might assume that e.g. the e1000, which presents a 1 Gbps link to the guest, is limited to 1 Gbps, or that vmxnet3 is limited to 10 Gbps. That is not the case: they can easily exceed their "virtual link speed". Test it with a network throughput tool like iperf and see for yourself.
That's because the physically imposed signaling limitations of real hardware do not apply in a virtualized environment between two VMs on the same host/port group. Guest OSes don't artificially throttle traffic to match the negotiated line speed unless the physical medium requires it.
For reference, I'm able to get 25+ Gbps with the iperf network throughput testing tool between two Linux VMs with a single vmxnet3 vNIC on the same host/port group. (Yes, 25 Gbps. Even though vmxnet3 emulates a 10 Gbit/s link, throughput is not artificially capped in the absence of the physical signaling limitation.)
Once traffic leaves the host for an external destination, you are capped by the physical ESXi host's NIC link speed.