VMware Cloud Community
richard6121
Contributor

Datacenter Relocation - Hosts stay but the storage moves...

Hi all,

I'd like to see if anyone has suggestions on the best way to handle this project.

We are relocating a portion of our datacenter to a new city. Both the old and the new datacenters have existing ESX 5 clusters. The ESX hosts will not move; what will move is the primary storage. We're packing up an entire NAS and shipping it to the new location, where we want to use the pre-existing hosts to spin the VMs back up on that storage. The old location has a small NAS which will stay behind.

What is the best way to orchestrate the VM moves within vCenter so that I don't end up with duplicate VM registrations or VMs unable to see their disks without manual reconfiguration?

I have already storage vMotioned the VMs to their appropriate locations -- the stay-behind VMs are on the smaller stay-behind NAS and the to-be-moved VMs are on the to-be-moved NAS.

Make sense?  What are the next steps?

Thanks for any suggestions!

4 Replies
a_p_
Leadership

What is the best way to orchestrate the VM moves within vCenter so that I don't end up with duplicate VM registrations or VMs unable to see their disks without manual reconfiguration?

If the two datacenters don't "see" each other, you may simply remove the VMs from the old vCenter Server's inventory and register them again on the new vCenter Server. Depending on the number of VMs, you can do this either manually in the vSphere Client (remove by right-clicking the VM; add by right-clicking the .vmx file in the datastore browser) or with a script. If you prefer a script, I suggest you search the VMware vSphere PowerCLI forum for examples.
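If you go the script route, the remove/re-register steps might look roughly like the sketch below. The datastore name "Moving-NAS", the host name, and the CSV path are placeholders, not anything from your environment — try it against a throwaway VM first:

```powershell
# Sketch only -- 'Moving-NAS', the host name, and the CSV path are placeholders.

# 1. Before shipping: record each VM's .vmx path, then unregister.
#    Remove-VM WITHOUT -DeletePermanently only removes the VMs from
#    inventory; the files stay untouched on the NAS.
$vms = Get-VM -Datastore (Get-Datastore 'Moving-NAS')
$vms | Select-Object Name, @{N='VmxPath';E={$_.ExtensionData.Config.Files.VmPathName}} |
    Export-Csv C:\vm-inventory.csv -NoTypeInformation
$vms | Remove-VM -Confirm:$false

# 2. After the NAS is mounted at the new site: re-register each .vmx
#    on a host in the destination cluster.
$dest = Get-VMHost 'newhost.example.com'
Import-Csv C:\vm-inventory.csv | ForEach-Object {
    New-VM -VMFilePath $_.VmxPath -VMHost $dest
}
```

This assumes the datastore is mounted at the new site under the same datastore name, so the "[Moving-NAS] vmname/vmname.vmx" paths recorded in the CSV still resolve. When you first power on a re-registered VM, vSphere may ask whether it was moved or copied; answering "I moved it" keeps the existing UUID and MAC addresses.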

André

richard6121
Contributor

The two datacenters are currently under management by a single vCenter installation.

They do have connectivity to each other, but the link is too slow for over-the-wire migrations. We've already moved a few VMs that way, in fact.

a_p_
Leadership

Another option could be to temporarily set up a "shipping host" which moves along with the storage. There's no need for a high-performance system, since you could just migrate the VMs to and from this shipping host while they are powered off.
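A rough PowerCLI sketch of the shutdown-and-cold-migrate step — the host and datastore names below are placeholders, not real inventory objects:

```powershell
# Placeholder names; adjust to your inventory.
$ship = Get-VMHost 'shipping-host.example.com'
$vms  = Get-VM -Datastore (Get-Datastore 'Moving-NAS')

# Gracefully shut down any running guests (requires VMware Tools)...
$vms | Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Shutdown-VMGuest -Confirm:$false

# ...wait until they are all powered off, then cold-migrate them
# to the shipping host. Move-VM on a powered-off VM does not need
# a vMotion network between the source and destination hosts.
$vms | Move-VM -Destination $ship
```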

André

richard6121
Contributor

The shipping host idea is excellent. We are running identical blade chassis in both locations, so this would be quite easy. I'd only need to re-IP the blade, fix up the VM networks, and point it at the storage target's new IP.

Question... We are using an IP address rather than a DNS name to connect to the NFS storage. When I point the ESXi hosts at the new IP address, will they treat it as a new datastore and assign a different GUID or something? The last time I moved some storage to a new IP address, none of the VMs could find their disks, because a GUID-like path was referenced in the .vmx files, and I had to manually remove and re-add the disks on every VM.
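For reference, I'm assuming the re-mount at the new site would be something like this on each host — the IP address, share path, and datastore name below are made-up examples, not our real values:

```shell
# Example values only -- substitute the NAS's new IP and the real export path.
esxcli storage nfs list                          # show current NFS mounts
esxcli storage nfs add --host=10.2.0.50 \
                       --share=/vol/vmware \
                       --volume-name=Moving-NAS  # reuse the old datastore name
```

My understanding is that the datastore's internal identifier is derived from the server address plus the share path, so a new IP would give it a new ID even if the display name matches — which would explain what I saw last time. Happy to be corrected on that.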

Why IP and not DNS name? Once, during an outage when no DNS was available, our hosts couldn't find the storage, which created a nasty catch-22 when trying to restore services. Should we quit doing this and go back to DNS names?
