VMware Cloud Community
BillClarkCNB
Enthusiast

Slow vMotion?

I have a 5-node cluster running ESXi 6.5 U3.  Each host has six 1Gb NICs: three for data, two for management, and one dedicated to vMotion.  Everything is functioning all right, but I think my vMotion tasks are slow.  A "change compute resource only" vMotion takes between 1:15 and 1:45 to finish.  Is that slow, or am I expecting too much?  I've verified that the MTU is set to 9000 on the vmkernel adapter and on the physical switch ports involved.  Is there something else to check that I'm missing, or should I simply add more NICs to each host and set them up as multi-NIC vMotion?
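
In case it matters, this is the sort of check I ran from the ESXi shell (vmk1 and 10.0.0.12 are placeholders for the vMotion vmkernel port and another host's vMotion IP):

# List vmkernel interfaces and confirm the configured MTU
esxcli network ip interface list

# Ping another host's vMotion IP with don't-fragment set and an
# 8972-byte payload (9000 minus 28 bytes of IP/ICMP headers);
# if this fails while a plain vmkping works, jumbo frames are
# not passing end to end
vmkping -I vmk1 -d -s 8972 10.0.0.12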

3 Replies
scott28tt
VMware Employee

@BillClarkCNB 

Moderator: Moved to vMotion & Resource Management Discussions


---------------------------------------

Although I am a VMware employee, I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
a_p_
Leadership

The time required to live migrate a VM mainly depends on the VM's memory size and the vMotion network speed.
To find out whether there's an issue with vMotion, open a command-line session and run esxtop, then press "n" to monitor networking. With a dedicated vMotion NIC, you should see that NIC fully utilized during a migration.
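
If you'd rather capture the numbers than watch them live, esxtop's batch mode works too; a minimal sketch (the interval and iteration counts are just examples):

# Run esxtop in batch mode: 5-second samples, 24 iterations
# (about two minutes), redirected to a CSV for later review
esxtop -b -d 5 -n 24 > vmotion-stats.csv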

From a network design perspective, I'd suggest a small change. Assuming that you are using VLANs, you could combine Management and vMotion on the same vSwitch, and configure Multi-NIC vMotion with the port groups set up as follows (assuming vmnic0 ... vmnic2):
- Management: vmnic0 active, vmnic1+2 standby
- vMotion-1: vmnic0 unused, vmnic1 active, vmnic2 standby
- vMotion-2: vmnic0 unused, vmnic1 standby, vmnic2 active
The benefit is that you double the available vMotion bandwidth while still retaining full redundancy for the Management network.
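
If you prefer the command line, the failover order can also be set from the ESXi shell. A rough sketch, assuming standard vSwitches and the port group names above ("Management Network" is a placeholder; adjust the names to your environment):

# Management keeps vmnic0 active, the vMotion NICs as standby
esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a vmnic0 -s vmnic1,vmnic2

# Each vMotion port group uses one NIC actively and the other as
# standby; vmnic0 is omitted, which should leave it unused
esxcli network vswitch standard portgroup policy failover set -p "vMotion-1" -a vmnic1 -s vmnic2
esxcli network vswitch standard portgroup policy failover set -p "vMotion-2" -a vmnic2 -s vmnic1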

Btw. MTU 9000 on a vMotion network will not help much.

André

BillClarkCNB
Enthusiast

So I did a couple of test vMotion tasks, and here's what I saw in esxtop (counter - vMotion FROM host - vMotion TO host):

PKTTX/s - 3367 - 9113
MbTX/s  - 952  - 4.57
PKTRX/s - 8887 - 14871
MbRX/s  - 4.54 - 970

If I'm reading that right, then with overhead the 1Gb vMotion NIC is fully saturated, and that accounts for the bulk of my "slowness".  I probably need to take your advice and either combine my current vMotion and Management ports or add another 2-port NIC and dedicate the extra ports to vMotion.
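
As a rough sanity check (the VM size here is a guess on my part): 952 Mb/s is about 119 MB/s, so a VM with, say, 10 GB of RAM needs on the order of 10,240 / 119 ≈ 86 seconds just to copy its memory once, which lines up pretty well with the 1:15-1:45 I've been seeing.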
