VMware Cloud Community
BillClarkCNB
Enthusiast

vMotion speed issues?

ESXi 7.0.3l on 5 different hosts, all connected to a solid-state, Fibre Channel SAN.  Cisco switch stack with gigabit ports for the Ethernet connections.  Each host has a dedicated gigabit Ethernet port, virtual switch, and VLAN for vMotion.  My issue is that migrating a guest VM from one host to another (compute resources only) seems slow: it takes several minutes for a single guest VM to migrate, and if I do multiple at a time, it takes upwards of 5-8 minutes for them to complete.  Most of these guest VMs are basic Windows servers with 4-6 cores, 12 GB RAM, and an attached drive or two.  When testing migration, the hosts have plenty of resources available.  Is the slowness due to a single gigabit port doing all the heavy lifting?  Do I need to add another port (or two) for vMotion to make it perform better?  Can I combine the vMotion and management ports?  I'm confused by this slowness.  I don't see the dedicated port maxing out its bandwidth, but I could be wrong there too and just not reading the correct numbers.
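
For reference, here's how I've been checking the port from the ESXi shell (standard esxcli commands; the interface names in the output are just whatever your environment uses):

    # Confirm the physical uplink really links at 1000 Mbps full duplex
    esxcli network nic list

    # Confirm which VMkernel interface carries vMotion, and its IP setup
    esxcli network ip interface list
    esxcli network ip interface ipv4 get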

8 Replies
a_p_
Leadership
(Accepted Solution)

To get live Rx/Tx rates, connect to the console of one of the hosts (e.g. via SSH) and run esxtop, which shows live network data after you press "n" for networking.
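
For example (esxtop is built into ESXi; only the output filename below is made up):

    # Interactive: press "n" for the network view, then watch MbTX/s on the
    # source host and MbRX/s on the destination host during a migration
    esxtop

    # Batch mode: capture 30 samples at 2-second intervals to review later
    esxtop -b -d 2 -n 30 > /tmp/esxtop-vmotion.csv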

With a single gigabit port, I'd expect about 2-2.5 minutes for the migration of a VM with 12 GB RAM.
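The math behind that estimate (assuming roughly 950 Mbit/s of usable throughput on a 1 GbE link):

    # 12 GB RAM = 12 * 8        = 96 Gbit to transfer
    # 96 Gbit / 0.95 Gbit/s    ~= 101 s for one pre-copy pass
    # plus 1-2 shorter passes for pages dirtied during the copy
    # -> roughly 2-2.5 minutes in total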
If you do have additional, unused network ports, consider setting up multi-NIC vMotion; a rough sketch of the setup follows below.
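
The esxcli side of it looks something like this (the port group name and IP are placeholders; you also need to pin each vMotion port group to a different active uplink in its teaming policy):

    # Add a second vMotion VMkernel port on its own port group
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-02
    esxcli network ip interface ipv4 set --interface-name=vmk2 \
        --ipv4=192.168.50.12 --netmask=255.255.255.0 --type=static

    # Tag the new interface for vMotion traffic
    esxcli network ip interface tag add --interface-name=vmk2 --tagname=VMotion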

André

crmercado
Enthusiast

Hi, are your management ports 10 Gbps?

If so, I suggest you try moving the vMotion service to those ports and performing the migration again; see the sketch below. It is likely that having only one 1 Gbps port is limiting vMotion.
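
A minimal sketch, assuming the management interface is vmk0 (the usual default):

    # Enable vMotion on the management VMkernel interface
    esxcli network ip interface tag add --interface-name=vmk0 --tagname=VMotion

    # And to revert later
    esxcli network ip interface tag remove --interface-name=vmk0 --tagname=VMotion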

This document details each component involved when using vMotion.

https://frankdenneman.nl/2012/12/04/calculating-the-bandwidth-usage-and-duration-of-a-vmotion-proces...

BillClarkCNB
Enthusiast

Sadly, we have no 10 Gb in our environment.

BillClarkCNB
Enthusiast

I had looked at that before, and if I remember correctly it was using less than half of the bandwidth it should be.  That was a while ago, so I will test again and see if it still holds true or if my memory is faulty.  Looking into this more, I've come across a VMware blog about migration tuning that mentioned adjusting an advanced setting on each host: Migrate.VMotionStreamHelpers.  The blog states that it defaults to 0, and that when activated it opens a single "thread" per vMotion IP address; the value can be raised to increase the number of "threads" used.  Any experience or knowledge of this setting?
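
In case it's useful to anyone else, advanced settings can be read and changed from the ESXi shell; I'm assuming the option path /Migrate/VMotionStreamHelpers based on the setting's name:

    # Show the current value (option path assumed from the setting name)
    esxcli system settings advanced list --option=/Migrate/VMotionStreamHelpers

    # Set it to e.g. 2 helper streams
    esxcli system settings advanced set --option=/Migrate/VMotionStreamHelpers --int-value=2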

a_p_
Leadership

Adjusting the number of streams is/was meant for fast networks (10 Gbps or more) to spread the workload.
I don't see a benefit from adjusting streams on a 1 Gbps vMotion network. With 1 Gbps you will definitely benefit more from multi-NIC vMotion.

That said, please note that adjusting streams is no longer necessary (and, from what I understand, not even recommended) with vSphere 7.0 Update 2 and later. See e.g. https://core.vmware.com/blog/faster-vmotion-makes-balancing-workloads-invisible

André

crmercado
Enthusiast

If possible, add more NICs for vMotion; this will benefit your vMotion performance.

BillClarkCNB
Enthusiast

According to esxtop, it appears to be operating correctly.  During vMotion tests of a basic server, I'm seeing MbRX/s values between 945 and 960 on the single vMotion NIC.  I'd say it is using all the bandwidth it can, and the only way to really improve is to add more physical connections.

crmercado
Enthusiast

Exactly. Unfortunately, the only real solution is to give it more bandwidth by adding another NIC.
