VMware Cloud Community
jmontgomery2
Enthusiast

Migrating Virtual Machines from an ESXi Host with a Distributed Switch to an ESXi Host with a Standard Switch

Basically, I have a vCenter 6.5 environment. There is a datacenter with a cluster configured with Distributed Switches. I am attempting to move the last virtual machine off that cluster to where it is supposed to go in another cluster. We have Enterprise Plus licenses, so I attempted to migrate both compute and storage resources. I get to the networking step, and the source and destination networks are listed, but then it gives me a compatibility issue:

"Currently connected network interface: Network Adapter 1 cannot use network "X.X.X.X" because the type of the destination network is not supported for vMotion based on the Source Network type".

8 Replies
Kinnison
Expert

Hi,


As far as I know, "live" migration of a virtual machine is not possible when the source network is a portgroup on a vDS and the destination is a portgroup on a vSS; a "cold" migration may produce a warning but should let you proceed. In my opinion, if possible, it would be better to first move your VM's networking to a portgroup on a vSS on the source host (perhaps you have one available) and then migrate the VM to its destination host.


Be aware that assigning a VM's network to a portgroup on an arbitrarily chosen vSS could cause it to lose network connectivity, so only you can judge which approach is best in your specific circumstances.


Regards,
Ferdinando

jmontgomery2
Enthusiast

Yeah, that is exactly the issue: the whole environment where it currently resides uses a Distributed Switch, and the destination environment doesn't have any Distributed Switches.
Kinnison
Expert

Hi,


Sorry, if that is the situation, how did you move all the other virtual machines? Were they all migrated "cold", or moved manually?


If for any reason you cannot afford downtime for the services provided by the VM in question, you could always free up an uplink from the existing "source" vDS and use it to create a dedicated vSS with a portgroup consistent with the destination hosts. At that point you move the VM's networking to the portgroup on that vSS and then migrate it "live", reducing the duration of the planned downtime. This assumes that, besides the vDS itself, no LAG or more complex settings are in play.
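On the source host, the temporary vSS described above could be created with esxcli roughly like this. This is a hedged sketch: the switch name vSwitchTemp, portgroup name VM-Temp, uplink vmnic3, and VLAN 100 are placeholders for your actual values.

```shell
# Create a temporary standard switch on the source ESXi host
esxcli network vswitch standard add --vswitch-name=vSwitchTemp

# Attach the uplink freed from the vDS (placeholder NIC name)
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitchTemp

# Create a portgroup matching the destination network and tag its VLAN
esxcli network vswitch standard portgroup add --portgroup-name=VM-Temp --vswitch-name=vSwitchTemp
esxcli network vswitch standard portgroup set --portgroup-name=VM-Temp --vlan-id=100
```

After moving the VM's network adapter to VM-Temp (Edit Settings in the vSphere Client), a live compute/storage migration should no longer hit the vDS-to-vSS restriction.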


I can't go any further than that; saying it is easy, but doing it is sometimes much less so.


Regards,
Ferdinando

jmontgomery2
Enthusiast

About 99% of the migrations so far have been straightforward. We are basically separating two companies and doing the best we can to isolate them. I have Distributed Switches configured in my environment, so migrating on my side has been pretty straightforward. The other environment, however, is older hardware and a Nutanix cluster stuck at 6.5 for now, and they don't have the resources to set up a Distributed Switch on their side. I think a cold migration will work, because from their ESXi hosts I can ping the network without issues; it just won't allow the live migration from a Distributed to a Standard Switch.

Kinnison
Expert

Hi


Understood. It seemed strange to me that all the other VMs might have been moved manually, but I was curious. 😀


Regards,
Ferdinando

jmontgomery2
Enthusiast

The worst part is that I ran a test yesterday: cold-migrating a 110 GB virtual machine took 7 hours. The one that needs to move is 2 TB. I'm going to have to go back to the drawing board and figure out a better process to get it moved, as that would require way too much downtime to be acceptable.
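For a rough sense of scale, the observed cold-migration rate works out to only a few MB/s. A minimal sketch of the arithmetic, assuming the 2 TB VM would transfer at the same effective rate as the 110 GB test:

```python
# Project the 2 TB cold migration from the observed 110 GB test run.
test_gb = 110            # size of the test VM (GB)
test_hours = 7           # observed cold-migration time (hours)

# Effective throughput in MB/s (1 GB = 1024 MB, 1 hour = 3600 s)
rate_mb_s = test_gb * 1024 / (test_hours * 3600)

# Projected time for the 2 TB VM at the same rate
big_gb = 2 * 1024
eta_hours = big_gb / test_gb * test_hours

print(f"~{rate_mb_s:.1f} MB/s effective, ~{eta_hours:.0f} hours for 2 TB")
```

At that rate the 2 TB VM would take well over five days, which supports finding a live-migration path instead of a cold one.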

Kinnison
Expert

Hi,


From what you say, I assume you are attempting the migration over a fairly slow "remote connection"; otherwise such a long time could not be explained.


In my humble opinion, I wouldn't even attempt a vMotion of a virtual machine with the disk sizes you indicated, not even the smallest one; if something goes wrong during the migration, you end up with a pot full of trouble.


Regards,
Ferdinando

a_p_
Leadership

Although this does not really explain why it took 7 hours, please note that cold migration uses the NFC protocol by default, which is not the fastest one. See e.g. https://core.vmware.com/resource/vsphere-vmotion-unified-data-transport

So if it's not related to bandwidth, consider temporarily configuring a vSS on the source host to allow vMotion migrations.

André