Poor network performance between VMs on different ESX servers


I'm testing the network performance I can achieve using JMS between two RHEL4 update 5 VMs running on ESX 3.0.2 update 1.

Running my test between two physical machines I can saturate the 1 Gb network but running between two VMs (on different ESX servers, but on the same class of blade) I only achieve 50% of the throughput.
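To rule out JMS overhead, it may help to measure raw TCP throughput between the two VMs first. One common approach (a sketch, assuming iperf is installed in both guests; the hostname below is a placeholder) is:

```shell
# On the receiving VM (placeholder hostname: vm-receiver):
iperf -s

# On the sending VM: run a 60-second TCP test with a larger socket buffer.
# The window size and duration here are illustrative, not tuned values.
iperf -c vm-receiver -t 60 -w 256K

# Compare the reported Mbits/sec against the ~940 Mbit/s practical ceiling
# of a 1 Gb link to see how much is lost to virtualization vs. JMS itself.
```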

After searching on these forums I carried out the following:

- disabled USB support in the ESX servers

- disabled IPV6 in the VMs
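For reference, this is roughly how IPv6 gets disabled on a stock RHEL4 guest (a sketch; the exact module aliases can vary by kernel, and the change needs a reboot to take effect):

```shell
# /etc/modprobe.conf: prevent the ipv6 kernel module from loading
echo "alias net-pf-10 off" >> /etc/modprobe.conf
echo "alias ipv6 off"      >> /etc/modprobe.conf

# /etc/sysconfig/network: tell the init scripts not to bring up IPv6
echo "NETWORKING_IPV6=no" >> /etc/sysconfig/network

# After rebooting, verify the module is gone:
lsmod | grep ipv6    # should print nothing
```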

During the first tests the VMs ran at 100% CPU, so I assigned 4 vCPUs. With 4 vCPUs the VMs used the equivalent of approximately 2 physical CPUs.

The physical hardware is HP ProLiant BL460c G1 (2 * quad-core Intel Xeons at 2.33 GHz) with 2 * Broadcom NetXtreme II BCM5708 Gigabit adapters per blade.

Does anyone have experience tuning network performance with this sort of setup, and can you suggest what I should be looking at to improve the percentage of theoretical throughput I achieve?



2 Replies

First of all, I think we need more info.

We are using the same kind of blades and our network performance is just fine.

What is the performance you are getting between the 2 physical machines?

Tell us something about the hardware config/network connectivity of these physical machines.

You are using blades, right?

Are both blades in the same blade enclosure or not?

What is the network infrastructure like? Are you using pass-through modules, Virtual Connect, or integrated switches?

The virtual machines: do they use raw LUNs on a SAN, VMFS partitions on a SAN, or locally stored VMFS partitions?

Do your physical servers have battery-backed write cache? Is it enabled? How much cache memory?





Do you have VMware Tools installed in the VMs? By default VMs are provided with a 10 Mbps card; VMware Tools does the masking to 1 Gbps... Hope this might help you.
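To see which virtual NIC the guest actually got, you can check the driver and reported link speed from inside the VM (a sketch; the interface name eth0 is an assumption):

```shell
# Show the driver bound to the interface
# (pcnet32 = emulated vlance device, vmxnet = paravirtual adapter)
ethtool -i eth0

# Show the negotiated link speed
ethtool eth0 | grep -i speed
```

If the guest is using the emulated pcnet32 (vlance) device, installing VMware Tools and switching the adapter to vmxnet is the usual first step for throughput problems.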
