VMware Cloud Community
inforhunter
Enthusiast

Network speed between VMs on the same host slower than between different hosts.

I have noticed that the network speed between VMs on the same host is significantly slower than between a VM and a physical machine, or between VMs on different hosts. Is this normal?

9 Replies
f10
Expert

Hi,

Can you share details about the test and the tools you are using to run it? Please note that network traffic between VMs connected to the same switch does not leave the host; the data is sent/received by the L2 switch (vSwitch) within the ESXi host.

You may use this KB to start troubleshooting: VMware KB: Troubleshooting network performance issues in a vSphere environment

-f10

Regards, Arun Pandey VCP 3,4,5 | VCAP-DCA | NCDA | HPUX-CSA | http://highoncloud.blogspot.in/ If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
john23
Commander

As f10 mentioned, for VMs created on the same host and connected to the same vSwitch, the outbound (physical) adapter plays no role, since the vSwitch is an L2 switch.

Only if you have different subnets configured for the VM network port groups, so that routing is needed, does the outbound adapter play a role.

-A

Thanks -A Read my blogs: www.openwriteup.com
inforhunter
Enthusiast

Yes, my VMs on the same host connect to the same virtual switch and are in the same subnet.

In my opinion, internal communication through the virtual switch should be faster than communication between hosts, but when I test by copying files, the speed between VMs connected to the same vSwitch is only about half of the speed between a VM and a physical machine.

I am not sure, but I have a faint impression that there is a KB that explains this situation. Does anyone know it?
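One thing worth noting: copying files measures storage and network together, so a slow disk on either end can mask the true network speed. A raw TCP throughput test takes the disk out of the picture. Below is a minimal Python sketch of the idea; it runs against localhost purely to illustrate the method, but between two VMs you would run the receiving side on one VM and point the sender at its IP (or use a dedicated tool such as iperf):

```python
import socket
import threading
import time

def recv_all(sock):
    # Drain the socket until the peer closes; return the byte count.
    total = 0
    while True:
        data = sock.recv(1 << 16)
        if not data:
            return total
        total += len(data)

def throughput_test(payload_mb=64, host="127.0.0.1"):
    """Send payload_mb MiB of zeros over a TCP socket; return (bytes, Mbit/s)."""
    srv = socket.socket()
    srv.bind((host, 0))          # port 0 = let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    result = {}
    def server():
        conn, _ = srv.accept()
        result["bytes"] = recv_all(conn)
        conn.close()

    t = threading.Thread(target=server)
    t.start()

    chunk = b"\x00" * (1 << 20)  # send in 1 MiB chunks
    cli = socket.create_connection((host, port))
    start = time.monotonic()
    for _ in range(payload_mb):
        cli.sendall(chunk)
    cli.close()
    t.join()                      # wait until the receiver has drained everything
    elapsed = time.monotonic() - start
    srv.close()
    mbit = result["bytes"] * 8 / elapsed / 1e6
    return result["bytes"], mbit

if __name__ == "__main__":
    sent, rate = throughput_test()
    print(f"received {sent} bytes at {rate:.0f} Mbit/s")
```

If this raw test is fast while file copies are slow, the bottleneck is storage rather than the vSwitch.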

john23
Commander

Internal communication consumes CPU and memory resources; the vSwitch is software, so switching traffic costs CPU cycles. If the host is under CPU or memory pressure, you may observe a delay.

Can you try this: use two different vSwitches and connect the VMs to different vSwitches; then we will know where exactly the issue lies.

Also check the esxtop network view (run esxtop and press "n"); it will show some useful details.
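The same network counters can also be captured non-interactively for later analysis. A sketch, assuming you are in an ESXi shell (the output path is just an example):

```shell
# Capture 10 samples at 2-second intervals in batch mode, then inspect
# the network columns (MbTX/s, MbRX/s, %DRPTX, %DRPRX) in the CSV.
esxtop -b -d 2 -n 10 > /tmp/esxtop-net.csv
```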

-A

Thanks -A Read my blogs: www.openwriteup.com
CloudIx
Contributor

I'm also having this same problem. Here is my setup and the testing that I have done.

ESXi 5.5

One host (Dell R710) with 2 NICs connected to a Dell switch with a LAG set up. One computer is also connected to this Dell switch with 2 NICs, also in a LAG. The vSwitch is set up with 2 NICs, and teaming is set to "Route based on IP hash". I have 2 VMs set up with vmxnet3: one is Server 2003 (32-bit), the other is Server 2008 (32-bit).

I'm using iperf with these arguments on the server, "iperf.exe -P -r -s -i 2", and these on the client, "iperf.exe -P -d -r -c 192.168.0.1 -t 60 -i 2".

When I transfer a 6 GB file from either server to the Windows 7 machine connected to the Dell switch, I get about 675 Mbit/sec. "esxtop n" shows no dropped packets on the Tx or Rx side.

When I transfer the same file from server to server on the same vSwitch and same subnet, I get about 5 Mbit/sec.

Just to make sure it was not an OS issue, I created a new Windows 10 VM and ran the same test. Again, a slow rate of about 10 Mbit/sec. It really seems to me that something is up with the vSwitch.

Also, both of these servers were physical machines that I converted to VMs. When they were physical computers, network speeds were as expected.

Tjcole
Contributor

I know this is an old post, but I'm experiencing the same issues. I have a brand new Dell R650 with six VMs on it (Server 2016 and 2019).

They only communicate at 12 MB/s between themselves and with other VMs, no matter what host those are on. My other hosts (R710 and R430) do not have this issue. I created two new VMs on the affected host, and they still only communicate at 12 MB/s.

All my hosts have dual 10 Gb fibre NICs. I didn't experience this issue until I migrated VMs to my new host (and my first clue that something was wrong was that it took most of the night to migrate just one VM (<256 GB in size)).

What was the final resolution on this? Thank you

arptech
Contributor

Did you find any solution?

mustu521
Contributor

I'm experiencing the same issue. I have a new Dell R650 server. Two VMs on the same host and same vSwitch only get about 100-200 Mbps between them. Both use 10 Gb VM NICs (vmxnet).

arptech
Contributor

I found the problem: the "write through / write back" setting on the RAID virtual disk I created. If it is set to "write through", change it to "write back".
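On Dell PERC controllers, the cache policy can be checked and changed from the command line with the perccli (or storcli) utility. A sketch; the controller and virtual-disk indices below are examples, so verify them with a "show" first, and note that write-back without a battery- or flash-backed cache risks data loss on power failure:

```shell
# List virtual disks and their current cache policy (/c0 = first
# controller; the Cache column shows WT for write-through, WB for write-back).
perccli64 /c0/vall show

# Switch virtual disk 0 from write-through to write-back.
perccli64 /c0/v0 set wrcache=wb
```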
