VMware Cloud Community
ewanat
Contributor

Network throughput issues and installing iperf on ESX/VI 3.5

I'm having some network issues with our VI3.5 (Update 4) hosts. I tested the network throughput between VMs, and from VMs to physical machines, and found that certain VMs perform differently.

Some VMs have an outgoing throughput of 70-80MB/s; others reach only half of that. The ones that manage 70-80MB/s outgoing get only 35MB/s incoming.

This happens even when the VMs are on the same host, on the same vSwitch, and in the same resource pool, with almost nothing else going on on the host.

The VMs and physical machines I tested with are mainly Windows 2003 SP2, but I also tested with Windows XP.

I tested the network throughput with JPerf 2.0.0 (a GUI front end for iperf).

I mainly tested with a single stream, but I noticed that with three or more parallel streams every machine reaches up to 95MB/s. Of course, that doesn't help as long as I don't use a multithreaded network application that could take advantage of it.
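For reference, the tests boil down to plain iperf runs like these (host name, duration, and stream count are only examples; iperf listens on port 5001 by default):

    iperf -s                    (run on the receiving machine)
    iperf -c vm01 -t 30         (single stream from the sender)
    iperf -c vm01 -t 30 -P 3    (three parallel streams)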

The other thing I have noticed is that the VM's never use two physical NIC's they always use only one NIC.

I'd also like to test from the service console, but I can't get iperf to install (see attached log file).

While checking for differences between the VMs (Windows 2003) with different network throughputs, I saw that they have more or fewer options in the network card properties (see attached Word document), and some have the Authentication tab while others don't.

Has anyone seen a similar issue? How can I get iperf to work on ESX, or do you know another network throughput tool that works on both ESX and Windows?

Enrico

7 Replies
Brocksampson
Contributor

I do not have an answer as to why, but I have the same issue. I have all-copper gigabit NIC/switch/NIC and cannot get any more than about 35MB per minute while imaging. Even within the same vSwitch I do not see any increased speed. My max seems to be around 35MB per minute (roughly two hours to image 4GB, which is crazy).

ewanat
Contributor

In your case, could the source or destination hard disk be the bottleneck? Is it faster when you transfer something via FTP?

DSTAVERT
Immortal

Your post is confusing, since you are posting in the ESXi forum but also mention installing iperf on ESX. On ESXi you might be able to compile a statically linked binary and copy it to the host; on ESX you should be able to install it as a Red Hat 3 compatible RPM.

-- David -- VMware Communities Moderator
ewanat
Contributor

Thank you. I downloaded the iperf RPM from here, installed it, opened the necessary firewall ports, and it works.
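In case it helps someone else, the steps were roughly as follows (the RPM file name below is only an example; esxcfg-firewall's -o option takes port,protocol,direction,name, and 5001 is iperf's default port):

    rpm -ivh iperf-2.0.4-1.el3.rf.i386.rpm    (file name is an example)
    esxcfg-firewall -o 5001,tcp,in,iperf
    esxcfg-firewall -o 5001,udp,in,iperf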

Sorry for the confusion; the topic is of course about ESX, not ESXi. I hope this thread can be moved to the correct forum.

DSTAVERT
Immortal

Anyone who searches should be able to find it no matter which forum it ends up in. Hope it helps.

It would be great if you could post the results of your testing and any conclusions you come to. That would be valuable for the community at large.

-- David -- VMware Communities Moderator
Brocksampson
Contributor

I'm using 10,000 RPM SCSI drives; I would think those would be quick enough. I'm still looking into this issue. It's probably a lack of CPU or something. I will let you know what I find over time. (Please feel free to let me know if you have any ideas as well.)

As always, thanks for any and all help.

ewanat
Contributor

I still don't have an explanation for why the NIC setting options are different, but I found the cause of the differing test results.

There is apparently a bug in JPerf/iperf: when you don't define a TCP window size, the output usually reports a default of 0.01MB.

The faster VM showed that same value, but in reality it was using a larger TCP window (maybe 56KB or more). If I manually set the TCP window size in JPerf/iperf to 64KB (the size normally configured in Windows for a gigabit connection), I get a throughput of 90-117MB/s on every VM.
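In JPerf that's the TCP window size field; on the plain iperf command line the equivalent is the -w flag, for example:

    iperf -c vm01 -w 64K -t 30    (64KB TCP window; host name is only an example)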
