VMware Cloud Community
dlane1975
Contributor

Poor network transfers VM to VM on same ESXi 5.5 Host

Need help on this from the experts in the community.

 

Brief overview of the setup. We have an entirely new virtual environment intended to run our future production SQL servers. We have a Compellent SC4020 SAN with 3 Dell PowerEdge R630s running ESXi 5.5 U2. Our switches are all new 10GbE switches dedicated to this project, and we also purchased a 10Gb switch for a datacenter backbone upgrade. The Dell switches are N4032Fs for iSCSI and an N4064 for top of rack. Everything down to the power cables is new in this setup.

Things were moving along nicely until we handed testing over to our DBA. He first noticed performance issues during his initial restore of a database from our old physical environment: the restore was taking twice as long as in the old environment. This began our research into the file transfer issue.

We started troubleshooting to narrow down where the problem lies. After many days of research, we have simplified our tests to remove as many variables as possible.

The heart of the issue seems to be this:

Two virtual Windows Server 2012 R2 machines running on the same ESXi host, connected to the same vSwitch and port group, cannot exceed 70 MBps (less than 1 Gbps) of network throughput. We have tested this many ways but cannot seem to break this barrier.

I have a lot of additional information but don’t want to cloud this simple issue with all the other things we have tried at the moment.

From the many articles I have read, I believe we should be getting very impressive network transfer rates using Windows file copies, iperf, etc. when the VMs are on the same host and vSwitch. The traffic should never leave the internal components of the hardware.
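As a sketch of the kind of test we ran (iperf3 syntax shown; the IP address is a placeholder for the second VM's address, not our actual addressing):

```shell
# On the first VM (server side) -- listens on TCP port 5201 by default:
iperf3 -s

# On the second VM (client side), pointing at the first VM's IP
# (10.0.0.10 is a placeholder -- substitute your server VM's address):
iperf3 -c 10.0.0.10 -t 30

# Repeat with several parallel streams to see whether a single TCP
# stream (e.g. one CPU core) is the bottleneck:
iperf3 -c 10.0.0.10 -t 30 -P 4
```

If the parallel-stream run scales well past the single-stream number, that points at a per-stream limit in the guest rather than the vSwitch.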

At this point I feel like we are missing something simple. There haven't been many custom changes to the Windows or VMware setup from the defaults, other than what was required to get our iSCSI going (jumbo frames, etc.), but I don't think it's our SAN at this time.

The simplest version of the issue is we cannot exceed 1Gbps throughput from VM to VM on the same ESXi host.

What is preventing us from reaching 10Gbps throughput or beyond in this setup?

Any help would be greatly appreciated. I have a lot of other information if it will help get to the bottom of this.

 

Darren

  • All VMs are Windows Server 2012 R2
  • Using VMXNET3 adapters with VMware Tools installed on the VMs
  • iSCSI is configured using jumbo frames with flow control enabled (although I don't think this is our issue)
  • Our 10Gbps top-of-rack switch is set with the default configuration. We are not using jumbo frames there, as we will have a mixture of 1Gb and 10Gb connections.
  • All NICs show they are connected at 10Gb, on both the virtual servers and the ESXi hosts.
  • Bandwidth throttling is not enabled on the vSwitch.
  • I can post whatever additional information you request.
2 Replies
ThompsG
Virtuoso

Hi and welcome to the forums!

I'm going to assume that since these are SQL servers they have multiple vCPUs assigned. Have you considered enabling RSS (Receive Side Scaling) for the network stack?

[Screenshot: vmware-rss.jpg — the RSS setting in the VMXNET3 adapter's advanced driver properties]

RSS is enabled by default at the OS level but disabled within the network driver. Please be aware that changing this may cause a brief network outage, so do it outside business hours.
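For reference, a quick way to check and change this from inside the guest is an elevated PowerShell prompt (the adapter name "Ethernet0" below is an example — check `Get-NetAdapter` for the real name, and note that changing it may briefly drop the link):

```shell
# Run in an elevated PowerShell prompt inside the Windows guest.

# Show RSS state for each adapter (Enabled column, queue/processor info):
Get-NetAdapterRss

# Enable RSS in the network driver for the VMXNET3 adapter.
# "Ethernet0" is an example name -- substitute yours from Get-NetAdapter:
Enable-NetAdapterRss -Name "Ethernet0"
```

This matches the driver-level setting shown in the screenshot; the GUI route via Device Manager > adapter Properties > Advanced does the same thing.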

You mention that you have done a number of tests, so apologies if we are going over old ground. You mention using iperf — was performance poor with that as well? If so, that would likely rule out storage as the issue.

Kind regards.

VirtualVanguard
Contributor

Just curious — have you tried running the same network performance test on a single-vCPU Windows VM, or even on another OS? That might tell us whether it's more of an infrastructure problem or an OS problem.
