Hi All
I currently have an issue where both my Unix and Windows machines have poor network performance between them (or so my app teams tell me).
Blades in the cluster are running VMware ESXi 5.0.0 build 1489271.
The vSwitch in question has two 10Gb NICs, both active.
Jumbo frames are not enabled in our environment, and never will be.
Taking one blade as an example - let's call it dc1chassis1blade1 (barely used: about 30% CPU and 30% RAM) - it hosts two Windows machines (2008 R2), VM1 and VM2, along with a few others.
Running iperf between them gives the results below.
VM1 - 10.0.0.1 (single NIC), DG 10.0.0.254 - server: iperf -s -w 256K
VM2 - 10.0.0.2 (single NIC), DG 10.0.0.254 - client: iperf -c 10.0.0.1 -P 1 -i 1 -p 5001 -w 256K -f m -t 10
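One variation worth trying alongside the single-stream run above (my suggestion, not part of the original test plan): a single TCP stream can be capped by window size and per-vCPU interrupt handling, so comparing one stream against several parallel streams helps separate a per-stream ceiling from a genuine path problem:

```shell
# On VM1 (server), same as above:
iperf -s -w 256K

# On VM2 (client): baseline single stream, then 4 parallel streams.
# If -P 4 scales roughly linearly over -P 1, the limit is per-stream
# (window size / single-vCPU handling), not the vSwitch or uplink.
iperf -c 10.0.0.1 -P 1 -i 1 -p 5001 -w 256K -f m -t 10
iperf -c 10.0.0.1 -P 4 -i 1 -p 5001 -w 256K -f m -t 10
```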
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 194 MBytes 1625 Mbits/sec
[ 3] 1.0- 2.0 sec 216 MBytes 1812 Mbits/sec
[ 3] 2.0- 3.0 sec 209 MBytes 1752 Mbits/sec
[ 3] 3.0- 4.0 sec 221 MBytes 1856 Mbits/sec
[ 3] 4.0- 5.0 sec 216 MBytes 1811 Mbits/sec
[ 3] 5.0- 6.0 sec 249 MBytes 2092 Mbits/sec
[ 3] 6.0- 7.0 sec 180 MBytes 1507 Mbits/sec
[ 3] 7.0- 8.0 sec 176 MBytes 1474 Mbits/sec
[ 3] 8.0- 9.0 sec 205 MBytes 1723 Mbits/sec
[ 3] 9.0-10.0 sec 171 MBytes 1436 Mbits/sec
[ 3] 0.0-10.0 sec 2037 MBytes 1709 Mbits/sec
Or in MBytes (a separate run):
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 225 MBytes 225 MBytes/sec
[ 3] 1.0- 2.0 sec 299 MBytes 299 MBytes/sec
[ 3] 2.0- 3.0 sec 352 MBytes 352 MBytes/sec
[ 3] 3.0- 4.0 sec 368 MBytes 368 MBytes/sec
[ 3] 4.0- 5.0 sec 177 MBytes 177 MBytes/sec
[ 3] 5.0- 6.0 sec 208 MBytes 208 MBytes/sec
[ 3] 6.0- 7.0 sec 263 MBytes 263 MBytes/sec
[ 3] 7.0- 8.0 sec 238 MBytes 238 MBytes/sec
[ 3] 8.0- 9.0 sec 259 MBytes 259 MBytes/sec
[ 3] 9.0-10.0 sec 275 MBytes 275 MBytes/sec
[ 3] 0.0-10.0 sec 2663 MBytes 266 MBytes/sec
That output is with the vmxnet3 driver and the latest Tools for this host version (8.6.12 build 1480661).
Surely I should expect to see much better network performance between them?
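As a sanity check on the units (assuming iperf's usual conventions: MBytes counted as 2^20 bytes, Mbits as 10^6 bits), the 10-second totals from both runs convert cleanly and both sit well under 10GbE line rate:

```python
def mbytes_to_mbits_per_sec(mbytes, seconds):
    """Convert an iperf transfer total (MBytes = 2^20 bytes) over an
    interval into throughput in Mbits/sec (1 Mbit = 10^6 bits)."""
    return mbytes * 2**20 * 8 / (seconds * 10**6)

# Totals from the two runs above:
run1 = mbytes_to_mbits_per_sec(2037, 10)  # first run, reported as 1709 Mbits/sec
run2 = mbytes_to_mbits_per_sec(2663, 10)  # second run, reported as 266 MBytes/sec

line_rate_mbits = 10_000  # a single 10Gb NIC
print(f"run 1: {run1:.0f} Mbits/sec ({run1 / line_rate_mbits:.0%} of 10GbE)")
print(f"run 2: {run2:.0f} Mbits/sec ({run2 / line_rate_mbits:.0%} of 10GbE)")
# → run 1: 1709 Mbits/sec (17% of 10GbE)
# → run 2: 2234 Mbits/sec (22% of 10GbE)
```

So the two tables are internally consistent and the drop from the 4800+ Mbits/sec baseline is real, not a units artifact.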
Apparently the app team ran this test last month and the lowest results were 4800+ Mbits/sec (600+ MBytes/sec).
So why have they become slow? It's not just these VMs - all of them in the cluster appear to be affected, even the Unix machines.
Please let me know if you need any more details on the environment, whether anyone has had similar issues, or any suggestions.
One thing I could try is installing the latest 9.x Tools from VMware, but the speeds in this environment were fine before, so I don't think it's a driver issue.
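If it isn't the driver, offload settings are the next usual suspect on vmxnet3. A hedged checklist rather than a diagnosis (verify the esxcli option name and esxtop fields against the ESXi 5.0 documentation before relying on them):

```shell
# Inside the Windows 2008 R2 guests: confirm RSS / chimney offload /
# receive window autotuning state.
netsh int tcp show global

# On the ESXi host: check whether software LRO is enabled for the TCP stack.
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled

# Live per-vNIC stats while an iperf run is in flight: run esxtop,
# press 'n' for the network view, and watch the VMs' ports for drops
# (%DRPTX / %DRPRX columns).
esxtop
```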
This example is on the same blade; between blades it's the same result. You would expect VMs on the same blade to have higher speeds - or am I missing something here?
Any help is appreciated
Thanks
Hi,
I have a similar issue with an HP C7000 blade enclosure and vSphere 5.1, with two 10Gb uplinks per enclosure.
Network performance is slow on my Windows 7 32/64-bit Professional VMs, but we get good performance on Windows XP virtual machines.
I also have some ThinPC VMs and performance is very good on them, so I think the problem is related to an MS patch or the OSes themselves.
If you find the root cause, please share it with me.