VMware Cloud Community
rcbsupport
Contributor

Low Network Performance ESX ( and ESXi for that matter )

I have been trying to figure out why neither ESX nor ESXi will achieve more than what seems to be 80 to 110 Mbps (little "b"), on the high end, for file transfers over SCP, SFTP, NFS, or FTP on a gigabit network (NICs, switch, Cat5e cables). It doesn't matter which direction the file transfer goes: throughput still sucks.

I am getting these figures from the ESX box using 'esxtop' and the vSphere Client, as well as by calculating the throughput on the Windows box side.
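(For anyone checking my math: throughput in Mbps is just bits transferred divided by elapsed seconds. A sketch with placeholder numbers, not actual measurements from my setup:)

```shell
# Rough throughput check: bits moved divided by seconds elapsed.
# File size and duration below are placeholder values, not measurements.
BYTES=1073741824          # 1 GiB test file (assumed)
ELAPSED=100               # seconds the copy took (assumed)
MBPS=$(( BYTES * 8 / ELAPSED / 1000000 ))
echo "${MBPS} Mbps"       # prints "85 Mbps", right in the range I'm seeing
```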

I have tried the following:

different NICs

different cables

direct connection between file server and ESX/ESXi

just about every combination of settings on the File Server's NIC for performance tweaking

ethtool adjustments (I get "Function not implemented")

different versions of ESX/ESXi (i.e. version 3.5)

File Server is running Win2K Server SP4 with Windows Services for UNIX 3.5 and

Dual 1.1 PIII

4 GB RAM

U160 SCSI

Intel Dual Gigabit NIC (not teamed)

VMware server is running ESX 4 Build 171294 / ESXi 4 and

dual quad-core Intel X5472

16GB FBDIMM DDR-800

3.0 Gbps SAS 15k drives

Intel Quad Gigabit NIC (not teamed)

We aren't lacking on horsepower as you can see.

Why am I getting slow network performance for file transfers between these hosts?

How can I fix it?

Help...I am about out of Sobe.

Squirrel

2 Replies
Datto
Expert

If you're just using lab computers (rather than production systems), you might try a different virtual Ethernet adapter type (e1000, vmxnet2, vmxnet3), assuming your guest operating system supports it. You can add an adapter of a different type by going to Edit Settings > Add > Ethernet Adapter, choosing a different type under Adapter Type, and seeing whether reconfiguring your VM to use that adapter increases the throughput.
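If the VM is powered off, the adapter type can also be set by editing the VM's .vmx file directly; a sketch (the ethernet0 device name and the exact virtualDev string are assumptions, so verify against your build):

```
ethernet0.virtualDev = "vmxnet3"
```

Common values are "e1000", "vmxnet", and "vmxnet3"; power the VM back on afterward so the guest redetects the NIC.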

Since you've already tried a direct cable connection, it's not a switch duplex mismatch.

Datto

rcbsupport
Contributor

Interesting. I guess I didn't even know you could do that in a VM. That is certainly helpful.

I think it may be a slightly different issue than that, as these transfers are between the VMFS partitions and another server. For example, when I restore a VM, I copy the VMDK files from the NFS datastore used for backups to the local DAS for working VMs. This copy is where I am only getting 80 to 110 Mbps at best; it averages 50 to 70 Mbps.
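(To put those rates in perspective, here's a quick sketch of how long a restore takes at the speeds I'm seeing; the 20 GB VMDK size is a made-up example:)

```shell
# Expected copy time at a given line rate (placeholder file size).
SIZE_GB=20                  # VMDK size in GB (assumed for illustration)
RATE_MBPS=80                # throughput I'm actually observing
SECS=$(( SIZE_GB * 8000 / RATE_MBPS ))   # GB -> megabits, then / Mbps
echo "${SECS} s"            # prints "2000 s", i.e. about 33 minutes
```

On a true gigabit link that same copy should be closer to 3 minutes.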

Is there a way to change this NIC type for the Service Console OS or the VMkernel? Maybe that would help a bit? Do you know any other reasons that would cause slowdowns for such file transfers?

Thanks again!

Squirrel
