magander
Enthusiast

Poor network performance between VMs

Hi,

I'm experiencing poor network performance and I wonder if anyone has any ideas how to solve this problem.

When running iperf (no customization except changing the running time to 60 seconds) internally on a Windows 2008 R2 VM, I reach about 428 MB/s.

When running iperf (same settings) between two Windows 2008 R2 VMs on the same ESXi server, I reach about 200 MB/s.

When running iperf (same settings) between two Windows 2008 R2 VMs on different ESXi servers, connected via a 10 Gbps network, I reach about 110 MB/s. That looks very much like a 1 Gbps network, but it is not.
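For reference, the commands were basically the defaults, something like this (server IP just an example):

iperf -s

iperf -c 192.168.1.1 -t 60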

A few TCP/IP settings have been tuned in the Windows VM/VMs (a reg add sketch is included after the list):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters

o TcpTimedWaitDelay 30

o MaxUserPort 32768

o TcpMaxDataRetransmissions 5

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\AFD\Parameters

o EnableDynamicBacklog 00000001

o MinimumDynamicBacklog 00000020

o MaximumDynamicBacklog 00001000

o DynamicBacklogGrowthDelta 00000010

o KeepAliveInterval 1

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{Interface GUID}

o TcpNoDelay 1

o TcpAckFrequency 1

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\DisableTaskOffload has been set to 0, 1, and 16.
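For reference, applying these values with reg add would look something like this (a sketch with one value shown per key; all values are REG_DWORD):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f

reg add "HKLM\SYSTEM\CurrentControlSet\Services\AFD\Parameters" /v EnableDynamicBacklog /t REG_DWORD /d 1 /f

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{Interface GUID}" /v TcpAckFrequency /t REG_DWORD /d 1 /f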

The TCP global parameters look like this:

----------------------------------------------

Receive-Side Scaling State: disabled

Chimney Offload State: automatic

NetDMA State: disabled

Direct Cache Access (DCA): enabled

Receive Window Auto-Tuning Level: normal

Add-On Congestion Control Provider: ctcp

ECN Capability: disabled

RFC 1323 Timestamps: disabled
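For reference, these parameters are listed and changed with netsh, for example:

netsh int tcp show global

netsh int tcp set global rss=enabled

netsh int tcp set global chimney=disabled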

I'm using ESXi 5.0.0 build-702118

I have seen the message below in the VMkernel log file, indicating that the network speed is not limited to 1 Gbps:

VMotionSend: 3508: 1352573741598557 S: Sent all modified pages to destination (network bandwidth ~843.801 MB/s)

vMotion used the same NICs as the VMs during the vMotion test.

Anyone?

HeathReynolds
Enthusiast

Are the hosts in the same VLAN/Port Group?

My sometimes relevant blog on data center networking and virtualization: http://www.heathreynolds.com
magander
Enthusiast

Hi,

The ESXi server management ports are on the same VLAN, and the VMs are on the same VLAN (same port group when running on the same ESXi server).

//Magnus

CyberTron123
Enthusiast

Hi TS

I have the exact same problem; we have a case open with VMware on this. If you are having the same problem, try this:

iperf -c 192.168.1.1 -i 1 -w 380k

Now you should be getting full speed!
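If needed, the receiving side can be started with a matching window size, something like:

iperf -s -w 380k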

However, my case has been open for about two weeks now with no resolution.

/Michael

Priby
Contributor

Hello,

Any news? I have exactly the same problem. My network speed from VM to VM on the same host is about 5 Mbit/s.

Corpus
Contributor

Any solution to this?

We have this too.

ESXi 5.0 U2 on HP DL580 G7

Running two Win2008 R2 64-bit test VMs with E1000

We have tested several different scenarios and bumped into this problem while looking for a performance issue.

The strange thing is that two identical Linux Debian 6 64-bit VMs do just fine in all test cases.

But when testing with iperf, the Windows VMs performed extremely poorly when on the same vSwitch, within the same VLAN.

Making the command-line parameter change suggested here did the trick.

Why does Windows perform so poorly in this test case? Could this also apply to some file transfers and other network traffic as well?

rickardnobel
Champion

Corpus wrote:

Running two Win2008 R2 64-bit test VMs with E1000

In general it is good to use the VMXNET3 adapter, which is more optimized.

Making the command-line parameter change suggested here did the trick.

Why does Windows perform so poorly in this test case? Could this also apply to some file transfers and other network traffic as well?

The iperf test might be somewhat "constructed", so the default Windows TCP behavior needs to be tweaked with different options, for example opening several sessions, to be able to push through large amounts of data.

However, it could be interesting to test some native Windows networking, for example copying a large file from one VM to another and simply watching in Task Manager what kind of bandwidth you get. (Do check that the storage is not the bottleneck, however.)
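For example, something like this (the share and file names are just placeholders); robocopy prints the transfer speed at the end, and Task Manager shows the live throughput:

robocopy \\vm2\testshare C:\temp largefile.bin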

My VMware blog: www.rickardnobel.se
MKguy
Virtuoso

As already mentioned, use the vmxnet3 vNIC.

Iperf on Windows just performs really sub-par with a single TCP stream. You need to use multiple parallel sessions (-P) and/or increase the TCP window size (-w) to get any decent results.
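For example, something like:

iperf -c 192.168.1.1 -t 60 -P 4 -w 512k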

On Linux, even with a single TCP session I can easily achieve 20+ Gbit/s with vmxnet3 and 2 VMs on the same host and port group.

-- http://alpacapowered.wordpress.com