MagicRecon
Contributor

VM network bandwidth, not getting 1 gigabit

I have multiple hosts in an IBM BladeCenter, but I'm not getting the throughput I would like. When I copy files from one VM to another on a different host, or even on the same host, I get 8-12 megabytes per second, which (at 8 bits per byte) works out to roughly 64-96 megabit, i.e. about 100 megabit instead of the 1000 megabit I would expect.

I've changed the adapters on some of my VMs from E1000 to VMXNET 3, which improves the throughput a bit, but it's still not what it should be.

There are no limitations configured, and all interfaces are at least 1 Gbit.

Any suggestions?

NinjaHideout
Enthusiast

Make sure that the source and the target VMs lie on different datastores. Each datastore should also preferably lie on a different LUN / array / disk.

What kind of storage are you using? If iSCSI or NFS, is the storage connection done via a gigabit pNIC?

MagicRecon
Contributor

The storage is Fibre Channel. I'll check the relation between the speed and the location on the datastore.

NinjaHideout
Enthusiast

Another thing: for testing "pure network performance" (i.e. without the risk of datastore performance interfering with your measurement, as when copying a large file from one VM to another), you can use iperf ( http://sourceforge.net/projects/iperf/ ). It has a server-client model and transfers random data at the maximum possible speed.
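
If installing iperf inside the guests is awkward, something along these lines does the same basic job (server-client, blast data one way, measure the rate) with nothing but Python; the port and duration below are arbitrary placeholders, and this is only a rough sketch, not a replacement for iperf's options:

import socket, sys, time

PORT = 5001              # arbitrary test port, open it in the guest firewall
CHUNK = b"\0" * 65536    # 64 KiB send buffer
DURATION = 10            # seconds to transmit

def server():
    # Receive everything the client sends and report the average rate.
    with socket.socket() as s:
        s.bind(("", PORT))
        s.listen(1)
        conn, addr = s.accept()
        print(f"client {addr[0]} connected")
        total, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(65536)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print(f"received {total / 2**20:.1f} MiB in {elapsed:.1f} s "
              f"-> {total * 8 / elapsed / 1e6:.0f} Mbit/s")

def client(host):
    # Send zeros as fast as possible for DURATION seconds, then close.
    with socket.socket() as s:
        s.connect((host, PORT))
        end = time.time() + DURATION
        while time.time() < end:
            s.sendall(CHUNK)

if __name__ == "__main__":
    # run "python net_test.py" on one VM and "python net_test.py <server-ip>" on the other
    client(sys.argv[1]) if len(sys.argv) > 1 else server()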

NinjaHideout
Enthusiast

Hmm, if it's FC, then the storage shouldn't be the bottleneck.

Is there maybe some antivirus running on the VMs that scans files "on the fly"? It might attempt to scan each file as it is transferred.

MagicRecon
Contributor

With iperf I've isolated the problem:

I get 1 Gbps between VMs on the same host: great!

I get 400 Mbps between VMs on different hosts in the same BladeCenter: OK.

I get 70 Mbps to an 'external' ESX host connected to the blade switch: I want it faster!

I really want to get this last speed up.

Could going from ESX 4.1 to ESXi 5 have anything to do with it?

NinjaHideout
Enthusiast

I doubt the upgrade has anything to do with it.

Another thing you can do is check the network performance graphs for each ESX host (for inter-host traffic): Host > Performance tab > Advanced > Switch to: Network > Chart Options. There you can view a lot of useful information for the pNICs, like packets received/transmitted, drop rate, usage, etc.
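
If you'd rather pull the same numbers from a script than click through the charts, here is a rough pyVmomi (Python vSphere SDK) sketch. The vCenter hostname and credentials are placeholders, it just grabs the first host it finds, and it assumes the standard net.usage.average counter (reported in KBps) is what you want:

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip cert validation
si = SmartConnect(host="vcenter.example.com",   # placeholder vCenter / host
                  user="administrator",
                  pwd="password",
                  sslContext=ctx)
content = si.RetrieveContent()
perf = content.perfManager

# Build a lookup of "group.name.rollup" -> counter key.
counters = {f"{c.groupInfo.key}.{c.nameInfo.key}.{c.rollupType}": c.key
            for c in perf.perfCounter}
net_usage = counters["net.usage.average"]       # KBps, per pNIC plus aggregate

# Take the first ESX(i) host in the inventory; narrow this down in a real run.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = view.view[0]

spec = vim.PerformanceManager.QuerySpec(
    entity=host,
    metricId=[vim.PerformanceManager.MetricId(counterId=net_usage, instance="*")],
    intervalId=20,                              # real-time stats, 20 s samples
    maxSample=15)                               # roughly the last 5 minutes

for result in perf.QueryPerf(querySpec=[spec]):
    for series in result.value:
        print(series.id.instance, series.value) # KBps per vmnic instance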

RParker
Immortal

MagicRecon wrote:

With iperf I've isolated the problem:

I get 1 Gbps between VMs on the same host: great!

I get 400 Mbps between VMs on different hosts in the same BladeCenter: OK.

I get 70 Mbps to an 'external' ESX host connected to the blade switch: I want it faster!

I really want to get this last speed up.

Could going from ESX 4.1 to ESXi 5 have anything to do with it?

Going to give you a purely topical view of networking. The network is NEVER the bottleneck. You will NEVER saturate the network; there are too many variables: disk spindle speed, CPU usage, disk type, type of data, encryption, and other network-related traffic will ALL affect file copy speeds, not to mention your switch is probably doing some level of packet inspection, which will diminish the speed.

On that same note, you are getting 70 Mbps; what leads you to believe you will GET faster simply because you WANT it faster? Maybe that's as fast as the disk, to fibre, to SAN, over a switch with these files will get. There may not be ANYTHING wrong.

Get SSD drives, put them in the server, run a DIRECT crossover cable to your fibre-connected device; THEN you can definitively PROVE where the problem is. It's NOT the network, period.
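
For example, something along these lines (a rough sketch; the path and size are placeholders, point it at the datastore-backed disk you actually care about) gives you a raw disk write baseline to compare against the roughly 8-9 MB/s that 70 Mbit/s implies, before spending more time on the network side:

import os, time

path = "/tmp/disk_test.bin"     # placeholder: put this on the disk under test
size = 1024 * 1024 * 1024       # 1 GiB test file
chunk = b"\0" * (4 * 1024 * 1024)

start = time.time()
with open(path, "wb") as f:
    written = 0
    while written < size:
        f.write(chunk)
        written += len(chunk)
    f.flush()
    os.fsync(f.fileno())        # make sure the data actually hit the disk
elapsed = time.time() - start

print(f"wrote {size / 2**20:.0f} MiB in {elapsed:.1f} s "
      f"-> {size / 2**20 / elapsed:.1f} MiB/s")
os.remove(path)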
