VMware Cloud Community
Nlecert
Contributor

VMware Converter Standalone 6.2 capped at 1 Gb/s?

Hi,

I'm currently running some performance tests with VMware Converter for Hyper-V conversions.

The Converter software is installed directly on my Hyper-V host running Windows Server 2012 R2, and the network card is an Intel X550-T 10 Gbps adapter.

My ESXi host runs version 6.5.0 and is also equipped with an Intel X550-T 10 Gbps adapter.

I'm trying to convert a powered-off Hyper-V VM stored on my Hyper-V datastore to my VMware ESXi host, and the job never goes higher than 800-850 Mbps.

I have already disabled encryption and restarted my Converter worker service, and changed the number of data connections per task, without significant improvement.

(see this VMware Knowledge Base article)
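
For reference, the KB change boils down to flipping one flag in converter-worker.xml. Here is a rough Python sketch of that edit; the file path and element names are what the KB describes for a default install, so verify them on your own machine:

```python
# Sketch: disable SSL for Converter's NFC data transfers by editing
# converter-worker.xml, as described in the KB above. The path below is the
# default install location on Windows and may differ on yours (assumption).
import xml.etree.ElementTree as ET

CONFIG = r"C:\ProgramData\VMware\VMware vCenter Converter Standalone\converter-worker.xml"

tree = ET.parse(CONFIG)
changed = False
for nfc in tree.getroot().iter("nfc"):       # the <nfc> section named in the KB
    use_ssl = nfc.find("useSsl")
    if use_ssl is not None and use_ssl.text and use_ssl.text.strip().lower() != "false":
        use_ssl.text = "false"               # unencrypted NFC stream
        changed = True

if changed:
    tree.write(CONFIG, xml_declaration=True, encoding="utf-8")
    # the Worker service must be restarted for the change to take effect
    print("useSsl set to false; restart the Converter Standalone Worker service")
else:
    print("useSsl already false (or not found); nothing changed")
```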

The job uses block-level cloning, and the two servers are connected to the same switch at 10 Gbps.

I checked with Windows Explorer copies: I can see my network card running at around 3.2 Gbps, so Converter should normally reach similar speeds.
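
To take SMB and the disks out of the picture entirely, I could also run a raw TCP test between the two hosts. A minimal Python sketch along the lines of what iperf does; the receiver IP and port below are placeholders:

```python
# tcp_throughput.py -- raw TCP throughput check between two hosts, to take
# SMB and disk I/O out of the picture. A sketch; IP/port are placeholders.
import socket
import sys
import time

CHUNK = 1 << 20  # 1 MiB per send/recv

def server(port):
    """Run on the receiving host: accept one connection and count bytes."""
    with socket.create_server(("", port)) as srv:
        conn, addr = srv.accept()
        with conn:
            total = 0
            start = time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            elapsed = time.monotonic() - start
            print(f"received {total} bytes: {total * 8 / elapsed / 1e9:.2f} Gb/s")

def client(host, port, seconds=10.0):
    """Run on the sending host: stream zero-filled buffers for `seconds`."""
    payload = bytes(CHUNK)
    with socket.create_connection((host, port)) as conn:
        deadline = time.monotonic() + seconds
        while time.monotonic() < deadline:
            conn.sendall(payload)

if __name__ == "__main__":
    # usage:  python tcp_throughput.py server 5001
    #         python tcp_throughput.py client 192.168.1.10 5001
    if sys.argv[1] == "server":
        server(int(sys.argv[2]))
    else:
        client(sys.argv[2], int(sys.argv[3]))
```

Run the server side on the destination host first, then the client on the source; if this tops out well above 1 Gb/s, the network path itself is not the bottleneck.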

So, is VMware Converter capped at 1 Gbps?

How can I check if there is a "bottleneck" on my network that I missed?

Have a good day.

Nicolas.

7 Replies
patanassov
VMware Employee

Hi

Converter is not capped. Looking for the bottleneck can be tricky; it could even be a disk rather than the network, though that is less likely.

Keep in mind that Converter uses NFC to write data, and NFC usually goes over ESXi's management network. So if the management network is uplinked to a slower card, for example, that alone will cap the transfer.

(On the Hyper-V side, the traffic uses the network the source machine runs on; you have already tested that, so it should not be the problem.)
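
If you want to see what the management vmkernel interface is actually uplinked to, something like this pyVmomi sketch can list the vmkernel NICs and the link speed of each physical NIC. The host name, credentials, and the simple single-datacenter inventory walk are placeholders/assumptions:

```python
# mgmt_uplink_check.py -- list vmkernel NICs and physical uplink speeds on an
# ESXi host with pyVmomi (pip install pyvmomi). Host, credentials, and the
# single-datacenter inventory walk below are assumptions for a simple lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only; use proper certs in production
si = SmartConnect(host="esxi.example.local", user="root", pwd="***", sslContext=ctx)
try:
    dc = si.content.rootFolder.childEntity[0]        # first datacenter
    host = dc.hostFolder.childEntity[0].host[0]      # first host in it
    net = host.config.network
    for vnic in net.vnic:                            # vmkernel interfaces (vmk0, ...)
        print(vnic.device, vnic.spec.ip.ipAddress)
    for pnic in net.pnic:                            # physical uplinks (vmnic0, ...)
        speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else None
        print(pnic.device, f"{speed} Mb/s" if speed else "link down")
finally:
    Disconnect(si)
```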

But it could be something else; it's hard to tell.

Regards,

Plamen

Nlecert
Contributor

Hi,

Thanks for your post.

I ran another test:

I started two conversion jobs: two VMs from the same Hyper-V server, stored on the same datastore, both converted to the same ESXi server/datastore.

Neither job goes higher than 650-800 Mb/s (around 85 MB/s, i.e. roughly 680 Mb/s divided by 8), but the LAN cards of the Hyper-V and ESXi servers both went up to 2 Gb/s.

[Screenshot: my two conversion tasks, with the network card running at 2.1 Gb/s]

[Screenshot: the network card of my ESXi host, showing a first step at 1 Gb/s and, 2-3 minutes later, a second step above 2 Gb/s]

So now I'm not sure: you tell me there is no network limitation in VMware Converter, but the problem doesn't seem to be on my network either.

And I believe you, because the software clearly can go higher than 1 Gb/s when running multiple tasks.

Maybe there is a tweak I missed for the conversion jobs I start. Do you have any advice on how to configure the conversion tasks?

patanassov
VMware Employee

Hi

Converter can throttle network traffic if a job is configured to do so. However, the default is no throttling, and I assume you haven't set it explicitly.

Parallel tasks are meant more for disk I/O optimization, not so much for the network.

Unfortunately I don't know what to suggest to optimize cloning (apart from disabling SSL encryption, which you have already done). I think you should focus on ESXi network performance tuning and/or disk I/O tuning.

E.g. this section from https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/Perf_Best_Pr... is relevant, since Converter also uses NFC:

To migrate powered-off virtual machines and, in some cases, to migrate “cold” data (the base disk, and any snapshots other than the current one, in a virtual machine that has a snapshot), Cross-host Storage vMotion will use the Network File Copy (NFC) service. As in the case for powered-on virtual machines, the NFC service will similarly preferentially use VAAI or the source host’s storage interface. If neither of these approaches are possible, the NFC service will use the network designated for NFC traffic.

NOTE: In-place upgrades of VMFS datastores with non-1MB block sizes to VMFS 5.x leave the block size unchanged. Thus in this situation it would be necessary to create a new VMFS 5.x datastore to obtain the performance benefits of 1MB block size.

NOTE: Prior to vSphere 6.0, NFC traffic could use only the management network (sometimes called the "provisioning network"). Starting with vSphere 6.0, NFC traffic still uses the management network by default, but can optionally be routed over a network dedicated to NFC traffic. This allows NFC traffic to be separated from management traffic, giving more flexibility in network provisioning. In any case, we recommend that NFC traffic be provided at least 1Gb/s of network bandwidth.
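
Regarding that last NOTE: on vSphere 6.0+ the dedicated NFC network is chosen by enabling the provisioning service on a vmkernel adapter. A hedged sketch with pyVmomi, assuming a HostSystem object `host` obtained as in my previous snippet and that vmk1 is the interface on the dedicated network:

```python
# Sketch: route NFC/provisioning traffic over a dedicated vmkernel adapter
# (vSphere 6.0+). `host` is a vim.HostSystem as in the earlier snippet;
# "vmk1" is an assumption -- use whichever vmk sits on the dedicated network.
host.configManager.virtualNicManager.SelectVnicForNicType(
    nicType="vSphereProvisioning",  # the provisioning (NFC) service
    device="vmk1",
)
```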

HTH,

Plamen

Nlecert
Contributor

Hi,

OK, I understand what you mean.

I looked on the ESXi side and modified some options on the management NIC (MTU / forced negotiation); it doesn't affect the transfer.

I migrated some VMs hosted on this ESXi host and could see the network card using its full capacity. If I then retry a VM conversion, it still won't go higher than 1 Gb/s.


So I'm still investigating, but I'm not sure the problem comes from my ESXi host.

patanassov
VMware Employee

Thank you for sharing this information. Unfortunately I can't suggest anything specific right now, but I've logged a task to investigate the issue.

Regards,

Plamen

Nlecert
Contributor

Hi,

Thank you, I'm looking forward to the results on VMware's side.

If you want any further technical information about my servers / Ethernet network / SAN, don't hesitate to ask.

I will keep investigating my infrastructure in the meantime.

Have a good day.

Regards,

Nicolas.

Fixitup77
Contributor

We are experiencing the same issue. I know this is an older thread, but the same thing is still happening: both vCenters and the VM with Converter are on 10 Gb interfaces, but Converter is still only utilizing 1 Gb/s.

Thanks

 
