VMware Cloud Community
rcbrown1
Contributor

Bandwidth Bottleneck

I have an SN550 node in a Flex chassis with an EN4093R switch and a physical ST550, all with 10GbE network capabilities.

I have the vmnics segmented into 3 networks: Management/vMotion (1000 Mbps), iSCSI (6500 Mbps), and VM traffic (2500 Mbps). When copying data between VMs (using VMXNET3) I'm getting just under 2500 Mbps. However, when copying to external devices, including the ST550, I only get 1000 Mbps. When checking with esxtop I noticed that, when copying externally, the 1000 Mbps management network is capping out, so as a test I changed the network speeds to 2500/2500/5000 Mbps through the BIOS of the node.

However, I'm still getting a maximum of 1000 Mbps throughput (checked with iperf and file transfer) when using my ST550. I would expect somewhere around 2500 Mbps.

I tried using iperf from the ESXi host to the VM and I only get 1000 Mbps. Any idea why this is happening?
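For reference, a quick way to confirm what link speed ESXi actually reports for each partitioned uplink is from the host shell (vmnic0 below is just an example name):

    # List all physical uplinks with the speed/duplex ESXi has detected
    esxcli network nic list
    # Details for a single uplink
    esxcli network nic get -n vmnic0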

8 Replies
Lalegre
Virtuoso

Hey @rcbrown1,

So I am presuming the scenario you are describing was configured with NIOC. Please answer these questions so we have more insight:

  • Are your physical ports configured to auto-negotiate?
  • Are your physical adapters configured to auto-negotiate?
  • What happens if you do the copy between two VMs inside vSphere but on two different ESXi hosts?
  • Check that you do not have any limit applied to the VM vNIC.
  • Check that you do not have any traffic shaping applied to the port group or a limit on the physical ports (a couple of commands for this are sketched just below).
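These can surface shaping or limits on a standard vSwitch or port group from the shell (vSwitch0 and the port group name are placeholders, and the sub-commands are from memory, so the built-in --help will confirm the exact options):

    # Shaping policy at the vSwitch level
    esxcli network vswitch standard policy shaping get -v vSwitch0
    # Shaping policy at the port group level
    esxcli network vswitch standard portgroup policy shaping get -p "VM Network"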

Those are some points to start with and see what is happening. I also remember reading in the past that ESXi by default has a mechanism that limits the speed of single streams to avoid saturating the interface, but I cannot recall where I read it, so I will try to find it as well; maybe I am getting confused.
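If that single-stream behaviour is in play, a multi-stream iperf3 run should show it, since several parallel streams would add up to more than one stream alone (standard iperf3 options; the IP address is a placeholder):

    # Receiving side
    iperf3 -s
    # Sending side: 4 parallel streams for 30 seconds
    iperf3 -c 192.168.10.50 -P 4 -t 30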

rcbrown1
Contributor

Hi Lalegre,

- Physical ports are not set to auto-negotiate; they are hard-coded in the BIOS.

- The physical adapters (I'm assuming you mean the vmnics, right?) are partly set to auto-negotiate. I tried flipping a couple of the vmnics to see if that was the issue.

- Copying data between two VMs on different ESXi hosts, I get just under 2500 Mbps.

- Looking through the networking I don't see any limits anywhere except the 2500/2500/5000 Mbps on the vmnics.

- No traffic shaping is applied to port groups or vSwitches. I tried playing around with those settings but had no success, so I disabled it.

I have not heard of ESXi limiting speeds like that, but that is very interesting; I will look around as well.

rcbrown1
Contributor

I realized I missed one of your questions.

No; NIOC is only available on Distributed Switches, correct?

Every ESXi host is using its own standard vSwitches.

Lalegre
Virtuoso

Oh okay, then how are you segmenting the limits for the different traffic types?

rcbrown1
Contributor

When accessing the BIOS there is a network tab that shows my 2-port 10GbE CN4052S card.

In there I tag nic0-1 with 10% = 1000 Mbps and VLAN 1000,

nic2-3 with 25% = 2500 Mbps and VLAN 1001,

nic5-6 with 65% = 6500 Mbps and VLAN 100,

and nic7-8 with 0% = 0 Mbps and VLAN 999 (not used anywhere).

From there, on my EN4093R switches I map VLANs 100, 1001, and 1000 to all internal ESXi hosts, map VLAN 1001 to the TOR switches, and map VLAN 100 to my SANs.

Within VMware I then see vmnic0-1 at 1000 Mbps, which I map to management/vMotion; vmnic2-3 is where all the VMs are connected; and vmnic4-5 is the iSCSI network for the SANs.
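As a side note, one thing that can be checked on each host is which VMkernel interface the external copies actually leave through, since anything bound to the management port group is capped by that 1000 Mbps partition (standard commands, nothing environment-specific):

    # List VMkernel interfaces and the port groups they sit on
    esxcli network ip interface list
    # Show their IPv4 addresses to match against the traffic being tested
    esxcli network ip interface ipv4 get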

Lalegre
Virtuoso

Then I think you have just described what the issue is.

Basically, the limit you are applying from the switch/adapter side is not something the ESXi hypervisor can interpret; it only understands link speeds like 1 GbE or 10 GbE, and the limit is being applied underneath it.

With 10 GbE, I recommend letting ESXi manage the throughput for the different services and, if you have the license, using NIOC in the future.

You have a dual-port NIC which is probably connected to your two internal switches and then to your TORs, so the failover policy of the VSS, which is probably Route Based on Originating Virtual Port ID, will effectively round-robin the connections across the uplinks, and I am pretty sure it will not impact your performance.

That is, unless you have a specific requirement for these limits.
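If you do remove the caps, the ESXi-side pieces to revisit are roughly these; the partition percentages themselves go back through the UEFI, and vmnic0/vSwitch0 below are placeholders to repeat per uplink and per vSwitch:

    # Put an uplink back to auto-negotiation instead of a hard-coded speed
    esxcli network nic set -n vmnic0 --auto
    # Confirm the failover / load-balancing policy on the standard switch
    esxcli network vswitch standard policy failover get -v vSwitch0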

rcbrown1
Contributor

I do not have a requirement to do this.

When the rack was built 3 years ago, two Lenovo techs came out to help build the infrastructure. They helped with the hardware, installed VMware on all the nodes, installed vSphere and XClarity Administrator, and integrated it with our old rack to use as a DR.

If this is the case, should I remove the limits and put all the VLANs on the full 10GbE? Unfortunately, I did some digging and NIOC is part of the Enterprise licensing; we cannot afford it at this time.

Lalegre
Virtuoso

I recommend you put the VLANs on the full 10GbE.

Using limits is mostly useful once you start experiencing issues; I would not use them as a preventive mechanism. Whenever you use limits you need to monitor how the applications and everything else behave, whereas the vSphere mechanisms are pretty solid at redistributing traffic and managing priorities.
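If a cap ever does become necessary without NIOC, traffic shaping at the standard port group level is the usual fallback. A rough sketch only: the option names are from memory (check the command's --help), the port group name is a placeholder, and the bandwidth values are in Kbps with the burst size in KB:

    # Limit a port group to roughly 2.5 Gbps average and peak
    esxcli network vswitch standard portgroup policy shaping set -p "vMotion" \
        --enabled=true --avg-bandwidth=2500000 --peak-bandwidth=2500000 --burst-size=102400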
