Converter is not capped. Finding the bottleneck can be tricky; it could even be a disk rather than the network, though that's less likely.
Keep in mind that Converter uses NFC to write data, and NFC normally goes over the ESXi management network. So if, for example, the management network is uplinked to a slower card, that alone can cap the transfer.
(On the Hyper-V side the traffic uses the network the source machine runs on; you have already tested that, so it shouldn't be the problem.)
But it could be something else; it's hard to tell.
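If it helps, here is a minimal pyVmomi sketch that lists each host's physical NICs with their negotiated speeds and its vmkernel ports, so you can see which uplink actually backs the management network that NFC uses. The hostname, credentials and the unverified SSL context are placeholders, not your environment:

# Hypothetical example: list physical NIC speeds and vmkernel ports on ESXi hosts,
# to check whether the management network (used by Converter's NFC writes)
# sits behind a slower uplink.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; use verified certs in production
si = SmartConnect(host="esxi01.example.local",  # placeholder ESXi host or vCenter
                  user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        # Physical NICs and their negotiated link speeds
        for pnic in host.config.network.pnic:
            speed = f"{pnic.linkSpeed.speedMb} Mb/s" if pnic.linkSpeed else "link down"
            print(f"  {pnic.device}: {speed}")
        # vmkernel ports (the management interface is normally vmk0)
        for vnic in host.config.network.vnic:
            print(f"  {vnic.device} portgroup='{vnic.portgroup}' "
                  f"ip={vnic.spec.ip.ipAddress} mtu={vnic.spec.mtu}")
    view.Destroy()
finally:
    Disconnect(si)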
Thanks for your post.
I ran another test:
I started two conversion jobs, 2 VMs from the same Hyper-V server stored on the same datastore, both converted to the same ESX server/datastore.
Neither job goes higher than 650-800 Mb/s (around 85 MB/s), but the LAN cards of the Hyper-V and ESX servers both went up to 2 Gb/s.
Here you can see my 2 tasks and the network card running at 2.1 Gb/s.
Here is the network card of my ESX; on the right you can also see a first step at 1 Gb/s and, 2-3 minutes later, a second step going above 2 Gb/s.
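To put the two sets of numbers side by side (the NIC graphs are in megabits per second, while Converter's task view is in megabytes per second), a quick unit check using the per-job range above:

# Unit check: NIC graphs report Mb/s (megabits), Converter tasks report MB/s (megabytes).
for mbps in (650, 800):            # per-job range observed above
    print(f"{mbps} Mb/s ~= {mbps / 8:.0f} MB/s")
# Two concurrent jobs therefore put roughly 1.3-1.6 Gb/s of payload on the wire,
# consistent with the NIC graphs climbing well past 1 Gb/s.
print(f"two jobs ~= {2 * 650 / 1000:.1f}-{2 * 800 / 1000:.1f} Gb/s combined")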
So now I'm not sure. You tell me there is no network limitation in VMware Converter, but the problem isn't on my network either.
And I believe you, because the software clearly can go higher than 1 Gb/s when multitasking.
Maybe there is a tweak I missed for the conversion jobs I start. Do you have any advice on how to configure the conversion tasks?
Converter can throttle network traffic and jobs can be configured to do so, but the default is no throttling, and I assume you haven't set it explicitly.
Parallel tasks are meant more for disk I/O optimization than for the network.
Unfortunately I don't know what else to suggest to optimize cloning (other than disabling SSL encryption, which you have already done); I think you should focus on ESXi network performance tuning and/or disk I/O tuning.
E.g. this section from https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/Perf_Best_Practices_vSphere65.pd… is relevant since Converter also uses NFC:
To migrate powered-off virtual machines and, in some cases, to migrate “cold” data (the base disk, and any snapshots other than the current one, in a virtual machine that has a snapshot), Cross-host Storage vMotion will use the Network File Copy (NFC) service. As in the case for powered-on virtual machines, the NFC service will similarly preferentially use VAAI or the source host’s storage interface. If neither of these approaches are possible, the NFC service will use the network designated for NFC traffic.
NOTE Prior to vSphere 6.0, NFC traffic could use only the management network (sometimes called the “provisioning network”). Starting with vSphere 6.0, NFC traffic still uses the management network by default, but can optionally be routed over a network dedicated to NFC traffic. This allows NFC traffic to be separated from management traffic, giving more flexibility in network provisioning. In any case, we recommend that NFC traffic be provided at least 1Gb/s of network bandwidth.
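If you do end up wanting to move NFC off the management network (vSphere 6.0 and later, as the note above says), tagging a dedicated vmkernel adapter for provisioning traffic is one way to do it. A rough pyVmomi sketch follows; the vCenter name, credentials, host name and "vmk2" are placeholders for your own environment:

# Hypothetical example: tag a dedicated vmkernel adapter ("vmk2") for
# "vSphereProvisioning" traffic so NFC/provisioning I/O leaves the management network.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter
                  user="administrator@vsphere.local", pwd="password",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    esx = next(h for h in view.view if h.name == "esxi01.example.local")
    nic_mgr = esx.configManager.virtualNicManager
    # Mark vmk2 as the adapter for provisioning (NFC) traffic
    nic_mgr.SelectVnicForNicType(nicType="vSphereProvisioning", device="vmk2")
    view.Destroy()
finally:
    Disconnect(si)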
OK, I understand what you mean.
I looked on the ESX side and modified some options on the management card (MTU / forced negotiation) ==> it doesn't affect the transfer.
I did a migration of some VMs hosted on this ESX, and I can see the network card using its full capacity. If I then retry a VM conversion, I can see that it won't go higher than 1 Gb/s.
So I'm still investigating, but I'm not sure the problem comes from my ESX.
Thank you for sharing this information. Unfortunately I can't suggest anything specific right now, but I've logged a task to investigate the issue.
Thank you, I'm looking forward to the results on VMware's side.
If you want any further technical information about my servers/Ethernet network/SAN, don't hesitate to ask.
I will keep investigating on my infrastructure in the meantime.
Have a good day.