VMware Cloud Community
GAPCABIV
Enthusiast

Moving VMs to separate vCenter & Cluster

Hi All.

I think I have my process down, but I just wanted to run it by a few more eyes to see if anyone spots a flaw in my plan.  For the record, I know we are running versions of vCenter and vSphere that are no longer supported; the ultimate end goal is to get onto a supported platform.  Today we have two separate vCenters, one for our server platform and one for VDI.  Over the years, though, some servers have been spun up on the VDI hosts, and tonight is the night I will be moving them out of the VDI environment and into the server environment where they belong.  This will let us get started on our VDI upgrade to 6.5 on a shiny new Nutanix platform.

Today both the vCenter and vSphere environments are on 5.1; VDI is on U2 and the server side is on U3.

I have configured the networking on the server platform so that the VDI VLAN can be used there.

This evening I will zone the storage that is currently visible only to the VDI platform so that the server platform can see it as well.

With the storage and VLAN now available to both environments, my plan (or at least my hope) is that I can simply shut down the servers from the VDI vCenter and remove the VMs from inventory.

Then I hop over to the server vCenter, browse the datastore, and add the VMs to inventory.  Power them up, answer "I moved it" if asked, and everything will hopefully be golden.
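For anyone who would rather script that cut-over than click through it, here is a minimal pyVmomi sketch of the same steps, assuming the shared storage is already visible on both sides.  The vCenter hostnames, credentials, VM name, and datastore path are placeholders, and task waiting / error handling is left out for brevity.

# Rough sketch only: unregister from the source (VDI) vCenter, re-register the
# same .vmx on the destination (server) vCenter, power on, answer "I moved it".
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    """Walk the inventory and return the first VM with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(v for v in view.view if v.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()   # lab-style SSL handling, placeholder

# --- source (VDI) vCenter: shut down and remove from inventory ---
src = SmartConnect(host="vdi-vcenter.example.local", user="administrator",
                   pwd="***", sslContext=ctx)
vm = find_vm(src.RetrieveContent(), "app-server-01")
vm.ShutdownGuest()                        # graceful shutdown via VMware Tools
# ... wait until vm.runtime.powerState == "poweredOff" ...
vm.UnregisterVM()                         # files stay put on the shared datastore
Disconnect(src)

# --- destination (server) vCenter: add to inventory and power on ---
dst = SmartConnect(host="srv-vcenter.example.local", user="administrator",
                   pwd="***", sslContext=ctx)
content = dst.RetrieveContent()
dc = content.rootFolder.childEntity[0]            # first datacenter (placeholder)
pool = dc.hostFolder.childEntity[0].resourcePool  # first cluster's root pool
task = dc.vmFolder.RegisterVM_Task(
    path="[SHARED_DS] app-server-01/app-server-01.vmx",
    name="app-server-01", asTemplate=False, pool=pool)
# ... wait for the register task to finish ...
new_vm = find_vm(content, "app-server-01")
new_vm.PowerOnVM_Task()
# vCenter raises the "did you move it or copy it?" question; answer "I moved it"
q = new_vm.runtime.question
if q is not None:
    moved = next(c.key for c in q.choice.choiceInfo if "moved" in c.label.lower())
    new_vm.AnswerVM(questionId=q.id, answerChoice=moved)
Disconnect(dst)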

Is it really going to be that simple?  I hope so but that is why I am asking here.

PS, just to throw one other wrench into things: there are two VMs in this batch that have physical RDMs.  The plan for them is to present those RDM volumes from the storage to the server hosts while keeping them zoned to the VDI hosts as well.  Because I cannot keep the non-RDM parts of these two VMs on the volume they sit on today, I will need to create a new datastore, present it to the hosts in both vCenters, and then do an "Advanced" Storage vMotion moving them to the new datastore while keeping the disk format set to "Same format as source"; otherwise my RDMs would be converted to VMDKs, and when we are talking about MSCS, that would not be a good thing.
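For what it's worth, here is a minimal sketch of that Storage vMotion step against the API, assuming a live pyVmomi ServiceInstance si; the VM and swing datastore names are placeholders.  Leaving the relocate spec's disk transform unspecified is the API-side equivalent of choosing "Same format as source" in the wizard, so the physical RDM mapping files move with the VM home but the disks themselves are not converted to VMDKs.

# Rough sketch only: Storage vMotion to the swing datastore with no disk
# transform requested, which matches "Same format as source" in the wizard.
from pyVmomi import vim

def svmotion_same_format(si, vm_name, target_ds_name):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine, vim.Datastore], True)
    vm = next(o for o in view.view
              if isinstance(o, vim.VirtualMachine) and o.name == vm_name)
    ds = next(o for o in view.view
              if isinstance(o, vim.Datastore) and o.name == target_ds_name)
    view.Destroy()

    # Only the target datastore is specified; with no per-disk transform the
    # disks keep their current format, so the pRDM pointers stay pRDMs.
    spec = vim.vm.RelocateSpec(datastore=ds)
    return vm.RelocateVM_Task(spec=spec)

# e.g. svmotion_same_format(si, "cluster-node-01", "SWING_DS")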

2 Replies
daphnissov
Immortal

This would, in theory, work, but I would advise you to do this a little differently.

  1. Create a separate "swing" datastore that can be presented to your VDI environment as well as the new vSphere 6.5 environment. Don't expose every datastore you have. That often sets users up for failure if something happens.
  2. Ensure that the VMs you wish to move are brought up to the highest VM hardware level or, at the very least, hardware version 7. If you have any version 4 VMs out there, they need to be upgraded, but be aware that VMware Tools must be upgraded first (a pre-flight sketch covering this and the next point follows the list).
  3. Ensure you have no local devices or ISOs attached to the VMs being moved.
  4. Keep in mind that after the move, any backup jobs, monitoring, or similar integrations will need to be re-pointed at the new environment, and their historical data is effectively lost.
  5. I encourage you to read my article which discusses some of this strategy and important caveats here.
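A minimal pre-flight sketch for points 2 and 3, assuming pyVmomi and a VirtualMachine object vm; the hardware-version floor of 7 and the specific checks are my own framing rather than a definitive list.

# Rough pre-flight sketch: flag VMs that still need a hardware/Tools upgrade
# or have CD-ROM/ISO devices that should be disconnected before the move.
from pyVmomi import vim

MIN_HW_VERSION = 7   # "vmx-07"; assumption based on the advice above

def preflight(vm):
    issues = []

    # VM hardware version, e.g. "vmx-04" -> 4
    hw = int(vm.config.version.split("-")[1])
    if hw < MIN_HW_VERSION:
        issues.append("hardware version %d < %d; upgrade VMware Tools first, "
                      "then the virtual hardware" % (hw, MIN_HW_VERSION))

    # VMware Tools must be current before a hardware upgrade
    if vm.guest.toolsStatus != "toolsOk":
        issues.append("VMware Tools status is %s" % vm.guest.toolsStatus)

    # CD-ROM devices backed by an ISO, or set to connect at power-on
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualCdrom):
            iso = isinstance(dev.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo)
            if iso or dev.connectable.startConnected:
                issues.append("%s has an ISO attached or is set to connect"
                              % dev.deviceInfo.label)

    return issues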
GAPCABIV
Enthusiast

So I was able to migrate several of the servers I needed to move last night, but because last night was also server patching night (including vCenter and our SQL servers), I was not able to do everything.  I certainly got the easy stuff out of the way, though: all the basic VMs with no RDMs or anything special.  I did the cutover exactly as described in my opening post and it was a piece of cake.

Next week I will do the five remaining servers.  One of them is a basic VM just like the ones from last night.  Two of them have physical RDMs, which I don't think will be an issue; if necessary, I will just remove the RDM pointers from the machine configs and recreate them (a rough sketch of that step is below).
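If I do end up recreating the RDM pointers, here is roughly what that looks like scripted with pyVmomi, patterned on the usual ReconfigVM_Task device-change flow.  A sketch only: the controller key, unit number, and naa name are placeholders, task waiting is omitted, and MSCS-specific settings such as the shared SCSI controller are not shown.

# Rough sketch: drop an existing physical RDM pointer and recreate it against
# the same LUN. controller_key, unit_number, and canonical_name must come from
# the VM's current configuration.
from pyVmomi import vim

def recreate_prdm(vm, old_disk, canonical_name, controller_key, unit_number):
    # Step 1: remove the old RDM pointer; 'destroy' deletes only the small
    # mapping file on the datastore, never the data on the raw LUN itself.
    remove = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.destroy,
        device=old_disk)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[remove]))
    # ... wait for the removal task before continuing ...

    # Step 2: add a new physical-mode RDM pointing at the same device
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
        deviceName="/vmfs/devices/disks/" + canonical_name,
        compatibilityMode="physicalMode",
        diskMode="independent_persistent")
    disk = vim.vm.device.VirtualDisk(
        backing=backing, controllerKey=controller_key, unitNumber=unit_number)
    add = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[add]))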

The last two, I think, are going to be a challenge: two MSCS clustered servers that share four physical RDMs between them.  My understanding is that with vSphere 5.5+ the LUN ID is not a big deal, but on 5.1 and lower (remember, we are on 5.1 today) the LUN ID presented to the destination hosts must match the LUN ID in the current environment for physical RDMs used by MSCS nodes.

I don't think it is going to be possible to achieve that, because one of the LUN IDs is already in use on the destination vSphere environment for another volume presented from our VNX.  When I did a test attach of one of the volumes used as a physical RDM by this cluster, which has LUN ID 12 on the source environment, it was assigned LUN ID 252 on the destination.
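For reference, here is a rough pyVmomi sketch of how one could check the host-visible LUN ID for a given naa.* device, so the source and destination hosts can be compared directly; the host objects and canonical name are placeholders.

# Rough sketch: report the LUN ID(s) under which a host sees a given device
# (by its naa.* canonical name), so source and destination can be compared.
def lun_ids_for_device(host, canonical_name):
    """host is a pyVmomi vim.HostSystem; canonical_name e.g. 'naa.600601...'"""
    storage = host.config.storageDevice
    # Map each ScsiLun's key to its canonical name
    key_to_naa = {lun.key: lun.canonicalName for lun in storage.scsiLun}
    ids = set()
    for adapter in storage.scsiTopology.adapter:
        for target in adapter.target:
            for lun in target.lun:                 # HostScsiTopologyLun
                if key_to_naa.get(lun.scsiLun) == canonical_name:
                    ids.add(lun.lun)               # host-visible LUN ID
    return sorted(ids)

# e.g. lun_ids_for_device(src_host, "naa.6006016...") -> [12]
#      lun_ids_for_device(dst_host, "naa.6006016...") -> [252]  (the mismatch)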

Is there any way to migrate this cluster over without having to break the cluster, disconnect all of the RDMs, migrate both nodes, and then recreate the RDMs and the cluster?
