Contributor

ESXi 4 with poor Network Performance on GigE (only up to 20Mbps throughput)

Hello folks,

I have some serious problems with our freshly installed ESXi 4.0 server on a DELL 2900 III with 2x Quad-Core Xeon, two separate GigE NICs and one dual-port GigE onboard NIC. The network throughput is poor, maxing out at 19-20 Mbps with the VMXNET driver enabled in the guest VMs (the correct VMware Tools are installed) - changing from autosensing to a fixed 1000 Mbps full duplex didn't help either. Tested with SCP, FTP, NFS and SMB - never more than 20 Mbps.

Does anyone have a hint for me? I currently use only one GigE port for 2 VMs, and I already tested one of the single GigE NICs instead of the onboard dual NIC - it doesn't matter, same problem.

yours,

mga

4 Replies
Expert

First test your storage performance. If it is 20 MB/s, then the network cannot be faster. :)

Enthusiast

You can test the network independently of storage with NETIO.

PS: Are you getting 20 Mb (megabit) or 20 MB (megabyte)?
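For reference, a minimal NETIO run might look like the sketch below (the hostname is a placeholder, and the flags are from my recollection of the tool's usage - check `netio -h` on your version):

```shell
# On the receiving machine, start NETIO as a TCP server:
netio -s -t

# On the sending machine, run the TCP client against that host
# (replace receiver-hostname with the actual name or IP):
netio -t receiver-hostname
```

NETIO then reports throughput for several packet sizes in both directions, which makes it easy to compare against what the storage tests show.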


Contributor

Hello meisermn,

I just tested the throughput of the storage on the DELL, which gave the following result:

# dd if=/dev/zero of=/fileserver/junk bs=4k count=125000

125000+0 records in

125000+0 records out

512000000 bytes (512 MB) copied, 18.4871 s, 27.7 MB/s

#

So I think 27.7 MB/s is not much for a SAS RAID-5 array with 10k hard drives. I thought SAS would reach around 300-600 Mbit/s, i.e. 37.5-75 MByte/s... :(
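As an aside, the dd line above has two pitfalls: the small 4k block size adds per-call overhead, and without a sync the page cache can inflate the result. A sketch of a write test that avoids both (assuming GNU dd for conv=fdatasync; the output path is just an example):

```shell
# Write 512 MB in 1 MB blocks; conv=fdatasync includes the final
# flush to disk in the timing, so the page cache doesn't skew it.
dd if=/dev/zero of=/tmp/junk bs=1M count=512 conv=fdatasync
```

The reported MB/s figure is then closer to what the array actually sustains for sequential writes.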

The server has a second, faster but smaller RAID-1 array with 2x 15k drives, on which I get a much better result:

# dd if=/dev/zero of=/tmp/junk bs=4k count=125000

125000+0 records in

125000+0 records out

512000000 bytes (512 MB) copied, 10.2691 s, 49.9 MB/s

#

OK, that was the server itself, but my reason for opening this thread was the NAS we use for backups. It is a 2TB Buffalo TeraStation II Pro Rack, and we mapped an NFS share on it into the filesystem of the virtualized file server. A test on this was poor:

# dd if=/dev/zero of=/nastest/junk bs=4k count=125000

125000+0 records in

125000+0 records out

512000000 bytes (512 MB) copied, 69.5007 s, 7.4 MB/s

#

Just 7.4 MByte/s over NFS to the TeraStation. But maybe this testing method is not ideal over an NFS share. Unfortunately the NAS has no shell access to test the throughput directly on the device. The NAS is equipped with S-ATA 1.5Gb drives in a RAID-5 array with a total of 2TB of storage, connected by one GigE NIC. So S-ATA 1.5Gb should reach a max throughput of 150 MByte/s - that is what the specification says. But read as 150 Mbit/s, that would be a "damned" 18.75 MByte/s...
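For what it's worth, the unit conversions can be sanity-checked quickly (a small awk sketch; the factor of 10 reflects SATA's 8b/10b encoding, which spends 10 line bits per data byte):

```shell
awk 'BEGIN {
  # SATA I line rate: 1.5 Gbit/s, 8b/10b encoded -> 10 line bits per byte
  printf "SATA 1.5 Gbit/s ~= %.0f MByte/s\n", 1.5 * 1000 / 10
  # 150 Mbit/s, by contrast, is only:
  printf "150 Mbit/s = %.2f MByte/s\n", 150 / 8
}'
```

So the drive interface itself is nowhere near the bottleneck; the ~18.75 MByte/s figure only appears if the 150 is mistakenly taken as megabits.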

OMG!

Seems like I found the reason for the poor 19-20 MByte/s throughput :(

Thanks for the hint to check the storage performance first.

yours,

mga

Contributor

Hi wessie,

It was MByte per second, not Mbit per second, that I measured before. The values were originally in Kbps, but I converted them to Mbps and then to MByte values.

NETIO was a great hint, thank you very much. With netio I tested internally between VM guests, which resulted in about 460 Mbit/s throughput. A second test from a VM guest to an external Win XP workstation resulted in almost 550 Mbit/s. The values are almost identical in both directions.

One thing I do not understand is the built-in real-time network monitor of the vSphere client. It always delivers "real time" values that differ from my netio results: when netio reports 460 Mbit/s, the real-time monitor of the vSphere client only shows about 18900 Kbit/s... I wonder why! I suspect that someone or something is calculating wrong...

yours,

mga
