Sam30
Enthusiast

Virtual switch throughput

I have a simple confirmation query.

Say I have one standard virtual switch created out of 4 physical NICs, each with a 10Gb uplink.

Will the throughput of that virtual switch be 40Gb, or still just 10Gb?

If it's 40Gb, then how is traffic load balanced across each physical NIC? Is it evenly divided?

Is there a way I can find out which virtual machine's traffic on that virtual switch is passing through which physical NIC at any given point in time?

1 Solution

Accepted Solutions
a_p_
Leadership

The virtual network adapter (which in this case is likely a vmxnet3 adapter) is internally connected to the virtual switch, not to the uplink itself. It's actually the same as in the physical world. Think of an Internet router: if the router has internal 100MBit/s ports, that's what you will see on your PC, but you most likely don't have a 100MBit/s connection to the Internet!

André

16 Replies
a_p_
Leadership

By default, ESXi uses the "Route Based on Originating Virtual Port ID" load balancing policy, i.e. it distributes the VMs' virtual ports across the different uplinks as the VMs are powered on. There are other, more advanced policies available for Standard vSwitches and/or Distributed Switches too.

To find out about traffic and vmnic usage, I'd suggest you run esxtop from the ESXi host's command line.

André
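
For intuition, the default port-ID policy behaves like a static modulo mapping from virtual port to uplink. This is an illustrative sketch only, not a VMware API; the function and variable names are made up:

```python
def assign_uplink(virtual_port_id, active_uplinks):
    """Illustrative sketch of 'Route Based on Originating Virtual Port ID':
    each virtual port is pinned to exactly one uplink, so VMs spread out
    across the uplinks, but no single VM ever spans more than one."""
    return active_uplinks[virtual_port_id % len(active_uplinks)]

uplinks = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]

# Four VMs powered on in sequence each land on a different uplink:
for port_id in range(4):
    print(f"port {port_id} -> {assign_uplink(port_id, uplinks)}")
```

In esxtop's network view (press `n`), the TEAM-PNIC column shows which physical vmnic each VM port is actually pinned to at that moment.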

Hi,

It depends on your NIC teaming configuration. For example, when "Route Based on Originating Port ID" is configured as the NIC teaming policy and you have 4 physical cards with 10Gb/s bandwidth each, the maximum bandwidth for any single virtual machine is 10Gb/s.

Read the articles below:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=200612...

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100408...

-------------------------------------------------------------------------------------
Davoud Teimouri - https://www.teimouri.net - Twitter: @davoud_teimouri Facebook: https://www.facebook.com/teimouri.net/
Sam30
Enthusiast

So even if I'm teaming 4 10Gb physical NICs under "Route Based on Originating Port ID" to create a virtual switch, my throughput will not be 40Gb/s?

Because as I see it, the VMs connected to that virtual switch will be actively using all 4 NICs, which are 10Gb/s each.

My traffic from the VMs on that virtual switch will pass through each of the 4 physical NICs, which are 10Gb/s each.

a_p_
Leadership

Each of the uplinks will be utilized (assuming you run multiple VMs), but a single VM's virtual network adapter will be assigned to only one of the uplinks, i.e. with a max. of 10GBit/s.

André
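
In numbers, as a back-of-the-envelope sketch (illustrative values, not measurements):

```python
UPLINK_SPEEDS_GBPS = [10, 10, 10, 10]  # four 10Gb/s physical NICs

# The vSwitch as a whole can move the sum of its uplinks' bandwidth,
# provided enough VMs are spread across them...
aggregate_capacity = sum(UPLINK_SPEEDS_GBPS)   # 40 Gb/s across many VMs

# ...but any single VM is pinned to one uplink, so its ceiling is:
single_vm_ceiling = max(UPLINK_SPEEDS_GBPS)    # 10 Gb/s for one VM

print(aggregate_capacity, single_vm_ceiling)
```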

Sam30
Enthusiast

Exactly, so that makes the throughput of the VIRTUAL SWITCH 40Gb/s.

Now say I'm teaming two physical NICs of different speeds, 10Gb/s and 1Gb/s, to make a virtual switch, and I let multiple VMs run on that virtual switch.

In esxtop I can see one VM is mapped to the 10Gb NIC and the other VM is mapped to the 1Gb NIC, but if I now log in to those VMs and check the bandwidth on their network adapters, both show me 10Gb/s.

It should actually show 10Gb/s bandwidth on one VM and 1Gb/s on the other. Any idea why it's showing 10Gb on both?

a_p_
Leadership

The virtual network adapter (which in this case is likely a vmxnet3 adapter) is internally connected to the virtual switch, not to the uplink itself. It's actually the same as in the physical world. Think of an Internet router: if the router has internal 100MBit/s ports, that's what you will see on your PC, but you most likely don't have a 100MBit/s connection to the Internet!

André

Sam30
Enthusiast

OK, well, like I said, I have two physical NICs of speeds 10Gb/s and 1Gb/s teamed to create one virtual switch, which makes the throughput of the VIRTUAL SWITCH 11Gb/s.

I know it won't show 11Gb/s on the Guest OS adapters, but shouldn't it show the same speed as the uplink which that Guest OS is using for its traffic flow?

Yeah, it's a vmxnet3, and it displays 10Gb/s, while if I connect an E1000/E1000E it displays 1Gb/s.

So I can see one of my virtual machines is using the 10Gb/s uplink and the other is using the 1Gb/s uplink from the teamed NICs, but they don't show speeds accordingly.

Shouldn't a vmxnet3 show a speed of 1Gb/s instead of 10Gb/s when it's accessing the 1Gb/s uplink?

a_p_
Leadership

No, the speed that shows up in a VM (or a physical system) is that of the switch port it is connected to (assuming the network adapter supports this speed), which - in the case of a VM - is the port of the vSwitch. It will even show this speed with no uplink connected to that vSwitch, i.e. when you use the vSwitch for internal traffic only.

André
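
A rough sketch of that behaviour: the link speed a guest reports is a property of the virtual adapter model alone (the values below are the ones observed in this thread; the function name is made up for illustration):

```python
# Link speed reported inside the guest, by virtual adapter model (Gb/s):
REPORTED_SPEED_GBPS = {"vmxnet3": 10, "e1000": 1, "e1000e": 1}

def guest_reported_speed(adapter_model, uplink_gbps):
    """The uplink speed - even None, for an internal-only vSwitch - is
    irrelevant; only the adapter model decides what the guest sees."""
    return REPORTED_SPEED_GBPS[adapter_model.lower()]

print(guest_reported_speed("vmxnet3", 0.1))  # 10, even behind a 100Mb/s uplink
print(guest_reported_speed("E1000", None))   # 1, even with no uplink at all
```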

Sam30
Enthusiast

So by default the speed of each port on the vSwitch is 10Gb/s, irrespective of what and how many uplinks we have?

I never found the virtual port speed in any of VMware's articles, though.

So even if I've created a vSwitch with just 1 uplink of only 100Mb/s and have multiple machines running on that vSwitch, it's still going to show me a 10Gb/s speed in the Guest OS for all the machines running with vmxnet3, and 1Gb/s for all machines running with an E1000/E1000E adapter?

a_p_
Leadership

That's absolutely correct. Think about it: which network speed would you expect to see between two virtual machines on the same vSwitch (independent of the uplink)?

André

5mall5nail5
Enthusiast

Correct. The vmxnet3 adapter has a virtual connection to the virtual switch at 10Gb/s, even if the physical link from the host to the physical network is through four 1Gb/s uplinks. However, that does not give you 4Gb/s of throughput; it gives you four 1Gb/s throughputs. Think of it like a highway. Compare a 55 mph speed limit with one lane going each way to a 55 mph speed limit with 4 lanes going each way. With 4 lanes in each direction at 55 mph you cannot put one big car on there and do 220 mph; you just have the ability to carry 4 cars at 55 mph each.
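
The highway analogy in numbers (an illustrative sketch, with made-up names):

```python
def teamed_link(lanes, gbps_per_lane):
    """Teaming adds lanes (aggregate capacity), not speed: any single
    flow still travels at the per-lane limit."""
    return {
        "aggregate_gbps": lanes * gbps_per_lane,  # total traffic it can carry
        "single_flow_gbps": gbps_per_lane,        # ceiling for any one flow
    }

print(teamed_link(4, 1))  # four 1Gb/s uplinks: 4Gb/s total, 1Gb/s per flow
```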

Sam30
Enthusiast

Thanks. So my understanding is that each virtual port's speed is 10Gb/s irrespective of the uplinks, so it's like a physical switch which has 10Gig ports. Correct me if I'm wrong there.

Is there a way I can change the port speed to, say, 20Gb/s or maybe 2Gb/s without teaming Guest OS NICs?

5mall5nail5
Enthusiast

Sam30 wrote:

Thanks. So my understanding is that each virtual port's speed is 10Gb/s irrespective of the uplinks, so it's like a physical switch which has 10Gig ports. Correct me if I'm wrong there.

Is there a way I can change the port speed to, say, 20Gb/s or maybe 2Gb/s without teaming Guest OS NICs?

That's correct - the connection between any vmxnet3 and its virtual switch will be 10Gb/s irrespective of any uplinks. In fact, think about what happens when you create a vSwitch that is not associated with any physical adapters at all: you still have a 10Gb/s connection within the guest!

You can change the speed/duplex of the VMXNET3 adapter within Network Connections | Properties; you can choose from 10, 100, 1.0Gb/s, and 10Gb/s, full and half duplex. But you can't make it something the driver can't support. There's no such thing as a 2Gb/s Ethernet card, nor a 20Gb/s one. When NICs are teamed and Windows reports a summed speed, it is summing the throughput: you are making your pipe fatter, not faster.

Sam30
Enthusiast

If I'm making it fatter, I'm giving more space for traffic to flow, so it can use both the teamed uplinks together for the data flow, eventually making it faster?

5mall5nail5
Enthusiast

Sam30 wrote:

If I'm making it fatter, I'm giving more space for traffic to flow, so it can use both the teamed uplinks together for the data flow, eventually making it faster?

No - bandwidth is not the same as transfer speed. If you team two 1Gb/s NICs and then access the server from a node with a 10Gb/s NIC, you'll have an egress speed of 1Gb/s on the original server. The same holds if you team four 1Gb/s NICs and access it from a single node. But if you team four 1Gb/s NICs and access the server from 2 nodes, each with a 10Gb/s NIC, then you will have an egress speed of 2Gb/s from the original server. It's not going to let you transfer at 4Gb/s; it's going to give you 4 channels of 1Gb/s.
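
The arithmetic behind those examples, as an illustrative sketch (it assumes the best case, where each client flow lands on a distinct team member; the function name is made up):

```python
def egress_gbps(team_size, nic_gbps, n_clients):
    """Best case for a NIC team: each client flow hashes to its own team
    member, so total egress grows with the number of clients, capped by
    the team size; per-client egress never exceeds one NIC's speed."""
    return {
        "per_client": nic_gbps,
        "total": min(n_clients, team_size) * nic_gbps,
    }

print(egress_gbps(4, 1, 1))  # one client:  1Gb/s total egress
print(egress_gbps(4, 1, 2))  # two clients: 2Gb/s total egress
```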

ShawnMcC
Contributor

I concur, because I did some testing between 2 Windows 10 VMs on the same ESXi 6.7 host using iperf3. If you run iperf3 with default settings you get a throughput of about 4.8 Gbit/sec:

[ 4] 0.00-10.00 sec 5.59 GBytes 4.80 Gbits/sec sender
[ 4] 0.00-10.00 sec 5.59 GBytes 4.80 Gbits/sec receiver

but if you saturate the connection with the following switches: -P 16 -w 64k -l 128k -t 30, you can get 8.44 Gbit/sec

[SUM] 0.00-30.00 sec 29.5 GBytes 8.44 Gbits/sec sender
[SUM] 0.00-30.00 sec 29.5 GBytes 8.44 Gbits/sec receiver

All of that changes once you leave the internal switch and enter an uplink. If it is 1Gbit/sec, that is all you are going to get.
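
As a sanity check on those numbers: iperf3 reports transferred data in binary GBytes (GiB, 2^30 bytes) but rates in decimal Gbit/s, so the two columns tie out like this (a rough sketch; the function name is made up):

```python
def iperf_rate_gbps(gibytes, seconds):
    """Relate iperf3's 'GBytes transferred' column to its Gbit/s column:
    bytes are binary (GiB), rates are decimal gigabits per second."""
    return gibytes * 2**30 * 8 / seconds / 1e9

print(round(iperf_rate_gbps(5.59, 10), 1))  # ~4.8, the single-stream run
print(round(iperf_rate_gbps(29.5, 30), 1))  # ~8.4, the -P 16 run
```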
