VMware Cloud Community
fars
Enthusiast

ESXi export to VMDK: hidden speed-limiting factor? No change after 5GbE network upgrade + faster destination drive

Hi, I'm curious whether there is a hidden factor limiting transfer speed when exporting a VM from the GUI to VMDK/OVF/MF files.

Source in ESXi is always:

  • ESXi 7.0.2 on an idling Ryzen 9 5950X (16C/32T)
  • export from a PCIe 4.0 NVMe Samsung 980 SSD

Destination is always:

  • an idling Windows 10 machine with a Threadripper 1950X (16C/32T)

 

Upgrading the following produced no change in export speed:

  • 5GbE Ethernet connection replaced the 1GbE link
  • destination drive upgraded from a magnetic platter to a PCIe 3.0 NVMe SSD

 

While monitoring the network connection I saw that I had to add the Management function to the 5GbE vmkernel interface, because the export runs over the management connection. The data is definitely going over the 5GbE link (I can see it on the switch).

 

Why would removing the two biggest bottlenecks result in no change in export speed?

Thanks for any help, Far

3 Replies
Taz79
Contributor

What speeds are you getting?

I think there is a hidden hard cap on transfer speed. We see around 100-150 Mbit/s when exporting a VM, and VMware Converter gets the same speed when moving a machine between two hosts. The network is 1 Gbit with no congestion, so I suspect VMware has built in a speed limit. I was also searching for this, because it's quite annoying when moving large machines.

fars
Enthusiast

Yeah, I wonder if there is a cap somewhere.

The ESXi host is idling, as is the destination machine, and the 1G network is quiet, but the speed sits at around 210-230 Mbit/s.

Maybe the export is a CPU-thread-limited process on the ESXi host?

I was seeing CPU at ~8% with the export as the only task, which works out to roughly two threads' worth.
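
Quick back-of-the-envelope check of those numbers, as a small Python sketch (it assumes the ~8% figure is host-wide utilisation across all 32 logical threads):

```python
# Back-of-envelope check, assuming the ~8% CPU figure is measured
# across all 32 logical threads of the 5950X (not per-core).
observed_mbit_s = 230                     # observed export throughput, Mbit/s
logical_threads = 32                      # 5950X: 16C/32T
host_cpu_pct    = 8                       # host-wide CPU use during export

throughput_mb_s = observed_mbit_s / 8                     # ~28.8 MB/s
busy_threads    = logical_threads * host_cpu_pct / 100    # ~2.6 threads' worth

print(f"~{throughput_mb_s:.0f} MB/s on the wire, ~{busy_threads:.1f} threads busy")
```

If only two or three threads are busy while the rest of the host idles, that points at a per-export CPU limit rather than at the network or the drives.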

Y_U_NO_DIE
Contributor

Ran into this problem today on an old server with a pair of Xeon E5-2697 v2 CPUs.

The connection is just gigabit, but it would only yield 11-25 MB/s when exporting a machine, even with none of the VMs running.

It's a test environment and I have full admin access to it, so first I ran iperf3 and got a healthy 930-950 Mbit/s of bandwidth.

Directly copying a file with SCP reached up to 45 MB/s, but it was very unstable, sometimes plummeting below 10 MB/s.

I made some observations with esxtop and concluded that the most likely bottleneck is the compression of the much larger VMNAME_FLAT.vmdk into VMNAME.vmdk during the transfer, and that the utility doing it is strictly single-threaded: all cores sat at 0-0.2% while one was completely pegged.

People claiming it must be an intentional cap probably got that impression because you can export many VMs at the same time and each one runs at the same speed until the NIC saturates; but that's simply because each export process takes one core, and servers usually have plenty of them. A rough sketch of exploiting that follows.
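
Purely to illustrate that point, here's a minimal Python sketch that launches several exports in parallel with ovftool so each single-threaded compression stream gets its own core. The host name, credentials, VM names and output paths are made-up placeholders, and the vi:// locator may need adjusting for your inventory:

```python
# Hypothetical sketch: run several ovftool exports side by side so each
# single-threaded compression process gets its own host core.
# Host, credentials, VM names and output paths are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

ESXI_HOST = "esxi.example.local"          # placeholder ESXi host
ESXI_USER = "root"                        # placeholder user
ESXI_PASS = "changeme"                    # placeholder; use a prompt/env var in real life
VM_NAMES  = ["vm01", "vm02", "vm03"]      # placeholder VM names

def export_vm(vm_name: str) -> int:
    """Export one VM to an OVA on the local machine via ovftool."""
    cmd = [
        "ovftool",
        f"vi://{ESXI_USER}:{ESXI_PASS}@{ESXI_HOST}/{vm_name}",  # source VM
        rf"D:\exports\{vm_name}.ova",                           # placeholder destination
    ]
    return subprocess.call(cmd)

# Each export compresses its own disk stream, so a handful in parallel
# can keep more host cores busy until the NIC finally saturates.
with ThreadPoolExecutor(max_workers=len(VM_NAMES)) as pool:
    exit_codes = list(pool.map(export_vm, VM_NAMES))

print(exit_codes)
```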

So, dear VMware team, don't you think it's time to either spend some quality time with the Greek oracle ladies (the ones handling... threads, you know) or at least give us some choice, such as a compression-quality setting or client-side compression? A single core of a client CPU usually beats a server core by quite a bit, and with a very fast network that could speed up the process a lot.
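
To make the client-side-compression idea concrete, here is a toy Python sketch that gzips an already-exported disk on the destination box. The paths are made up, and an OVF export actually packs disks as compressed stream-optimized VMDKs rather than gzip files, so this only shows where the CPU work would move:

```python
# Toy illustration of doing the compression on the client instead of the
# host: gzip an already-exported disk file. Paths are placeholders, and
# this is not the format ovftool itself produces.
import gzip
import shutil

SRC = r"D:\exports\vm01-disk1.vmdk"   # placeholder exported disk
DST = SRC + ".gz"

with open(SRC, "rb") as f_in, gzip.open(DST, "wb", compresslevel=6) as f_out:
    shutil.copyfileobj(f_in, f_out, length=1024 * 1024)   # 1 MiB chunks
```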
