VMware Cloud Community
zmichl
Contributor

Poor network performance on ESX 4

Hi,

I'm experiencing some strange problems with network performance on my ESX hosts. I have four Dell R710 servers, each with four onboard 1 Gbit NICs and two Intel 10 Gbit NICs (all on the VMware HCL).

I have a dvSwitch with two active 10 Gbit uplinks and one 1 Gbit uplink as a standby adapter from each server. In addition, each host has a local vSwitch with one console port and a VMkernel port for vMotion. This vSwitch has one active and one standby 1 Gbit uplink.

I use netio to run network performance tests in my virtual machines. The VMs are connected to a port group on the dvSwitch that uses the two 10 Gbit adapters as uplinks.

When I run these tests, I get very low network throughput: about 50 MByte/s if the machines are on the same host, and only about 30 MByte/s if the machines are on different hosts. The guest OS is Windows Server 2003 R2 Datacenter x64.
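For anyone who wants to reproduce this kind of measurement without netio, here is a minimal sketch of the same idea (memory-to-memory TCP throughput, no disk involved). The port number is an arbitrary assumption; run the server half in one VM and point the client at that VM's address.

```python
import socket
import threading
import time

PORT = 18767               # arbitrary free port (assumption)
CHUNK = 64 * 1024          # 64 KiB blocks, in the range netio uses
TOTAL = 64 * 1024 * 1024   # bytes to push per run

def server():
    """Accept one connection and discard everything it sends."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def measure(host):
    """Send TOTAL bytes to host:PORT and return the rate in MByte/s."""
    buf = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as s:
        start = time.monotonic()
        sent = 0
        while sent < TOTAL:
            s.sendall(buf)
            sent += CHUNK
    return sent / (time.monotonic() - start) / 1e6

if __name__ == "__main__":
    # Loopback demo; on the real setup run server() in one VM and
    # point measure() at that VM's address from the other.
    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)
    print(f"{measure('127.0.0.1'):.1f} MByte/s")
```

Because the payload never touches disk, a low number here points at the network path rather than storage.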

I first suspected a hardware malfunction or a wrong driver for the 10 Gbit NICs, but if I configure one of the 1 Gbit onboard NICs as the only active uplink for the VM port group, I get even lower transfer rates.

The problem also seems to exist in the service console: when I do some FTP transfers, the rate is likewise far below what it should be.

I've already double-checked that the NIC configuration matches the speed/duplex settings of my physical switching hardware, so there should be no problem there. The error counters on the switch are all at 0. At the moment the setup is auto-negotiation with no flow control, but I've also tried other values (e.g. 100/full, 1000/full). In addition, when I run the tests on physical boxes connected to the same switch, I get adequate transfer rates.

Does anyone have an idea how to investigate this further?

Thank you,

zmichl

7 Replies
AntonVZhbankov
Immortal

Try changing the VM NIC to VMXNET3 on both VMs and test with that.


---

MCSA, MCTS Hyper-V, VCP 3/4, VMware vExpert '2009

http://blog.vadmin.ru

zmichl
Contributor

Sorry, I forgot to mention: the NICs in the virtual machines are already VMXNET3...

zmichl
Contributor

I just tried VMXNET2 and got about 100 MByte/s. That's better, but still nowhere near 10 Gbit/s...

depping
Leadership

Can you check with esxtop whether you see any dropped packets at the vSwitch layer?
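Beyond eyeballing the interactive network view, esxtop's batch mode (`esxtop -b -n 1 > esxtop.csv`) dumps every counter as CSV, which is easier to scan. A small sketch that flags non-zero drop counters; the counter names in the sample are assumptions, since the exact naming varies by ESX version, so the code just matches "Drop":

```python
import csv
import io

def nonzero_drop_columns(csv_text):
    """Return {counter_name: value} for drop counters that are non-zero.

    Expects esxtop batch output: one header row of counter names,
    then one row of values per sampling interval (we look at the first).
    """
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    row = next(reader)
    return {name: float(val) for name, val in zip(header, row)
            if "Drop" in name and float(val) != 0.0}

# Hypothetical two-line sample in the batch-mode layout (names are assumptions):
SAMPLE = (
    '"Time","\\\\esx1\\Network Port(vmnic2)\\%DropRx","\\\\esx1\\Network Port(vmnic2)\\%DropTx"\n'
    '"10:00:00","0.00","1.25"\n'
)
print(nonzero_drop_columns(SAMPLE))
```

On the sample above only the Tx drop counter is reported, since the Rx counter is zero.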



Duncan

VMware Communities User Moderator | VCDX

-


Now available: Paper - vSphere 4.0 Quick Start Guide (via amazon.com): http://www.amazon.com/gp/product/1439263450 | PDF (via lulu.com): http://www.lulu.com/product/download/vsphere-40-quick-start-guide/6169778

Blogging: http://www.yellow-bricks.com | Twitter: http://www.twitter.com/DuncanYB

zmichl
Contributor

There are no dropped packets at the vSwitch layer. esxtop shows %DRPRX and %DRPTX both at 0 for all machines on every host.

I've just done another test with two machines running Windows XP Professional, each with 2 GB RAM, 1 vCPU, and a VMXNET3 NIC, doing an FTP transfer of a 2 GB file. If the machines are on the same host, the transfer rate is about 60 MByte/s; it drops to about 20 MByte/s if the machines are on different hosts. Before running this, I made sure that the storage is not the limiting factor here (because of the read/write of the 2 GB file on the two machines).

This is driving me crazy!

jussijaurola
Enthusiast

What storage are the virtual machines on? Local SATA disks with only a few spindles in the RAID set might cause this kind of situation, because you also see poor performance when the virtual machines are on the same host, and in that case the traffic never even passes the physical network card. Even though you have 10 Gbit links, the fact is you cannot get full bandwidth with slow storage.
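One way to rule storage in or out is to measure sequential file throughput inside the guest and compare it against the network numbers. A rough sketch (file size and path are placeholders; in a real check the file should be well past guest RAM so the page cache doesn't flatter the result):

```python
import os
import tempfile
import time

CHUNK = 1024 * 1024        # 1 MiB blocks
SIZE = 64 * 1024 * 1024    # demo size; bump well past RAM for a real test

def write_read_rates(path):
    """Return (write, read) throughput in MByte/s for a sequential test file."""
    buf = os.urandom(CHUNK)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(SIZE // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # force data to disk, not just the page cache
    write_rate = SIZE / (time.monotonic() - start) / 1e6

    start = time.monotonic()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    read_rate = SIZE / (time.monotonic() - start) / 1e6
    return write_rate, read_rate

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        w, r = write_read_rates(path)
        print(f"write {w:.0f} MByte/s, read {r:.0f} MByte/s")
    finally:
        os.remove(path)
```

If the disks can't sustain much more than the observed 20-60 MByte/s, the FTP test is measuring storage, not the network.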

ozwil
Contributor

Hi zmichl

Did you ever get to the bottom of your slow network performance issue? I ask because we're seeing something similar, though not quite as bad, on Dell R910 servers. We just can't seem to drive the 10 Gbit NICs very hard.

Cheers
