VMware Cloud Community
Solidbrass
Enthusiast

How can I remove performance caps from SCP on ESXi hosts?

I have several Mac Pro 6,1 systems running 6.7u3, fully patched, and I'd really like to be able to SCP the contents of VM directories off of these servers at a decent speed.  My scenario is copying the contents onto an encrypted, APFS-formatted SSD attached to a macOS machine so that I can transport VMs to another site.  There appears to be an artificial cap on SCP performance: even on a completely unloaded 12-core Xeon machine, it is not possible to get anywhere near line speed on 10GbE; in practice it is a bit under 1 Gb/s.  I googled around and found occasional questions about this but no satisfactory answer, so I figured I'd try here.

As an alternative I tried the download-from-server feature in Fusion Pro.  It is glacially slow regardless of whether you're moving a macOS, Windows, or Linux VM, so it does not solve my problem.  Worse, it is completely broken for macOS VMs: they no longer work if you then upload them back to an ESXi host.

For bonus points, I'd love to do this without converting thin-provisioned disks to thick, but I will definitely accept that hit if I can move 1000 MB/s instead of somewhere around 100.

3 Replies
IRIX201110141
Champion

This will not work well because the busybox environment within ESXi gets only limited resources, and standard copy tools such as scp do not perform well on VMFS.  If OVF export does not work for you, I suggest presenting an NFS datastore to ESXi and then using a VM clone in vCenter, or vmkfstools -i on the command line, to copy the *.vmdk from the local datastore to the NFS datastore.
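
A minimal sketch of the above, assuming an NFS export reachable from the host (the server name, share path, and datastore names are placeholders):

$ # mount the NFS export as a datastore on the ESXi host
$ esxcli storage nfs add -H nfs.example.com -s /exports/vmstage -v nfsstage
$ # clone the disk onto the NFS datastore; -d thin keeps it thin provisioned
$ mkdir /vmfs/volumes/nfsstage/myvm
$ vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk /vmfs/volumes/nfsstage/myvm/myvm.vmdk -d thin

The -d thin option also covers the bonus point: the clone stays thin provisioned instead of being inflated to thick.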

Regards,

Joerg

DanielJGF
Enthusiast

There isn't any speed limitation on VMware's part. The reason SCP from/to ESXi is slow over a WAN is a small receive buffer. This limitation is also present in other SCP/SSH implementations.

As SCP/SSH works over TCP, every time the remote buffer fills, the receiving end must send an acknowledgement before the sender will transmit any new data. As latency increases, the very nature of TCP combined with such a small buffer drastically reduces effective bandwidth.
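
To put rough numbers on this (an illustration of my own, assuming a 64 KiB window and a 10 ms round trip): the throughput ceiling is roughly window size divided by RTT, regardless of link speed.

$ # ceiling ≈ window / RTT; with the window in bits and RTT in ms, the result is kbit/s
$ echo $(( 64 * 1024 * 8 / 10 ))
52428

That is about 52 Mbit/s, which is why even a fast link crawls once latency enters the picture.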

We have written a post on improving effective SCP/SSH speed in VMware ESXi, along with some easy-to-implement workarounds.

As Joerg pointed out, mounting the target volume over NFS could be a possible workaround, but whether it works, or is even possible, will depend on your network topology.

andersonincorp
Contributor

For those who need it (and for future me): so you've got a shiny new server with ESXi and want to upload your *.vmdk or *.iso files.

1) Use the ESXi Host Client or scp and get disappointed. The web UI hung dead on me after 3 hours of uploading, and scp maxed out at about 1.6 MB/s.

2) The solution? Serve your files with an HTTP server and have ESXi download them with wget. I got about 40 MB/s with my setup, using nginx with a self-signed SSL certificate on a home server and downloading directly to the VMFS datastore.

$ ssh ... # login via ssh
$ esxcli network firewall set --enabled false # Disable firewall, optional, use if wget hangs dead instead of downloading
$ wget --no-check-certificate -O /vmfs/volumes/datastore1/whatever.vmdk https://your_home_server/path_to/whatever.vmdk
$ esxcli network firewall set --enabled true # Re-enable the firewall when the download finishes
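
If you don't have nginx handy, a quick stand-in on the serving machine (my own addition; this is plain HTTP, so use an http:// URL and drop --no-check-certificate):

$ cd /path/to/your/vm/files
$ python3 -m http.server 8080 # serves the current directory over plain HTTP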

Some thoughts: I really think VMware is playing ignorant here, probably to force users into their ecosystem for move/backup features, and has deliberately left the web UI and scp unusable for large file transfers. From my searching, this problem has existed since 2012 without any official answer from VMware, which looks like a non-consumer-friendly strategy we just have to swallow.
