VMware Cloud Community
jasmeetsinghsur
Enthusiast

How to control outbound vMotion traffic so it does not saturate the link?

Hi,

We have an old ESXi host with four 1 GbE uplinks for vMotion, and a new ESXi host connected to the Nexus switch with a 10 GbE uplink. We are planning to migrate the virtual machines from the old ESXi host to the new one over the vMotion network, with Multi-NIC vMotion configured to increase throughput. However, we want to limit the vMotion traffic towards the switch so that vMotion cannot saturate the link. Could you please recommend a solution, as we have no Enterprise license and therefore cannot use a vDS to enable NIOC?

The switch we are using is Nexus 3000 Series.
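
For reference, a minimal pyVmomi sketch that lists which vmkernel adapters are tagged for vMotion on each host, which is a quick way to confirm the Multi-NIC setup before migrating (the vCenter address and credentials are placeholders):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
        selected = set(cfg.selectedVnic or [])
        for vnic in cfg.candidateVnic or []:
            if vnic.key in selected:  # this vmk actually carries vMotion
                print(host.name, vnic.device, vnic.portgroup)
    view.Destroy()
finally:
    Disconnect(si)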

scott28tt
VMware Employee

Are the old and new hosts CPU compatible for vMotion?

Your lack of an Enterprise license also means you cannot have DRS with EVC.


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (ie. not in any official capacity)
VMware Training & Certification blog
jasmeetsinghsur
Enthusiast

Yes, both hosts meet the CPU compatibility requirements. Our concern is the vMotion traffic over Multi-NIC vMotion from the old ESXi host to the new one. There are four 1 GbE uplinks dedicated to vMotion traffic only. Will the traffic saturate the link?

daphnissov
Immortal

Normally you want to saturate the links with vMotion traffic so that migrations (and host evacuations) finish faster. If that's something you want to limit, remove some of the pNICs you've given it. Otherwise, without NIOC you cannot apply any such shaping at the ESXi level.
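
To make the "remove some pNICs" suggestion concrete, here is a hedged pyVmomi sketch that narrows the active-uplink list on the vMotion port group of a standard vSwitch; the port group name and the vmnic to keep are assumptions for illustration:

from pyVmomi import vim

def limit_vmotion_uplinks(host, pg_name="vMotion", keep=("vmnic2",)):
    # Shrink the teaming order so fewer 1 GbE uplinks carry vMotion;
    # uplinks left out of active/standby become unused for this port group.
    ns = host.configManager.networkSystem
    for pg in ns.networkInfo.portgroup:
        if pg.spec.name == pg_name:
            spec = pg.spec
            teaming = spec.policy.nicTeaming or vim.host.NetworkPolicy.NicTeamingPolicy()
            order = teaming.nicOrder or vim.host.NetworkPolicy.NicOrderPolicy()
            order.activeNic = list(keep)   # e.g. keep one 1 GbE uplink active
            order.standbyNic = []
            teaming.nicOrder = order
            spec.policy.nicTeaming = teaming
            ns.UpdatePortGroup(pg_name, spec)  # apply the narrowed teaming policy
            return True
    return False

With Multi-NIC vMotion each vmkernel port usually sits on its own port group with one active uplink, so you would run this per port group, or simply untag vMotion on some of the vmk adapters.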

jasmeetsinghsur
Enthusiast

I think you got me wrong here. Let me make it clearer.

We have two old ESXi hosts with VMs running, each with four 1 GbE uplinks dedicated to vMotion in a Multi-NIC configuration to increase vMotion throughput. On the other hand, we have two new ESXi hosts with four 10 GbE uplinks carrying VM, NFS, management, and vMotion traffic, each in a separate VLAN. All four ESXi hosts are connected to the Nexus switch and meet the CPU compatibility requirements. We are planning to migrate the VMs from the old cluster to the new cluster over vMotion without shared storage. Can the vMotion traffic flowing through the four 1 GbE uplinks saturate the link, and is there any impact on network performance?

daphnissov
Immortal

As long as there is L2 or L3 connectivity between the old and new hosts across the vMotion vmkernel ports, they can communicate. As to whether the vMotion traffic will completely saturate the 4 x 1 GbE interfaces, there is no way to know unless you try. Without shared storage, the first step in the vMotion process is to migrate the contents of the disk; depending on how well your underlying storage performs, this may or may not saturate any of those links. The last stage is to copy the contents of memory, which has a better chance of saturating the links. But regardless, since you have all of these plugged into a 10 GbE Nexus switch, I'm not exactly sure why you're concerned here.
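
To put rough numbers on those two phases, a quick back-of-the-envelope estimate (the ~70% effective utilisation and the VM's disk/memory sizes are illustrative assumptions):

GIB = 1024 ** 3
effective_Bps = 4 * 1e9 * 0.70 / 8          # four 1 GbE links at ~70% => ~350 MB/s
disk_gib, mem_gib = 200, 32                 # hypothetical VM
seconds = (disk_gib + mem_gib) * GIB / effective_Bps
print(f"~{seconds / 60:.0f} min per VM")    # roughly 12 minutes in this example

In practice the disk phase is usually gated by storage throughput rather than the network, so treat this as an upper bound on network load, not a promise of migration time.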

jasmeetsinghsur
Enthusiast

As long as we use a dedicated link/pipe for the traffic to flow, it won't be a problem for other traffic flows. It is only when we have a single link/pipe carrying multiple types of traffic that we have to configure NIOC or define QoS? Please correct me or provide any suggestions.

daphnissov
Immortal

But you've got only two source hosts, and each source host has four 1 GbE interfaces. Your destination hosts are 10 GbE capable. So even if, somehow, one source host does max out its four 1 GbE uplinks, that is still only 4 Gb/s against a 10 GbE link, so it won't get close to saturating the destination side. Not to mention that this is only for migration purposes, so it would be short-lived. Once the migrations are done, you don't have anything to worry about.

But, to answer your question: yes, if you want to control traffic on egress/ingress, you need NIOC, which means you need a vDS. I don't think you have a lot to be concerned about, but try a test VM in your environment to see how it works.
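
If you want to time that test programmatically, here is a minimal pyVmomi sketch that relocates one VM (compute plus storage, since there is no shared datastore) and reports the duration; the VM, destination host, and datastore objects are assumed to be looked up beforehand:

import time
from pyVmomi import vim

def test_migration(vm, dest_host, dest_datastore):
    spec = vim.vm.RelocateSpec()
    spec.host = dest_host                       # target 10 GbE host
    spec.datastore = dest_datastore             # disk moves too (no shared storage)
    spec.pool = dest_host.parent.resourcePool
    start = time.time()
    task = vm.RelocateVM_Task(spec=spec)
    while task.info.state not in (vim.TaskInfo.State.success,
                                  vim.TaskInfo.State.error):
        time.sleep(2)                           # poll until the task completes
    print(f"state={task.info.state}, took {time.time() - start:.0f}s")

Watching the vmnic counters on the host (or the Nexus port counters) during that run will tell you directly whether the 1 GbE links ever pin at line rate.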