VMware Cloud Community
BHagenSPI
Enthusiast

Issues moving cluster to new datacenter?

We need to move 6 ESXi 6.7 U1 hosts to a new geographic location for DR purposes.

Currently in vCenter, we have a single datacenter with 2 clusters; each cluster has 6 physical hosts.

I'm thinking I need to create a new datacenter and cluster in vSphere for the DR site, with all new dvSwitches, port groups, and VMkernel ports, with IPs that work at the DR site. Correct?

Then, what's the best way to move the hosts to the new datacenter and cluster in vSphere?

I'm thinking I need to disconnect the hosts from the current cluster, change the IPs on the physical ESXi hosts, and then add them to the new cluster. Correct?

(Next, I've been told I need to spin up a new instance of VCSA on the DR cluster, but I'll ask questions about that in the other thread I have going for that.)

UPDATE: I failed to mention that we are running vSAN on these hosts; each set of 6 hosts creates a single vSAN datastore, so we'll be moving the DR datastore to the new DR cluster as well. Any hints about that would be great, too!

7 Replies
daphnissov
Immortal

You need to move them to a separate vCenter Server, which negates the need for a new datacenter object and cluster under the existing one. Assuming there are no VMs being moved with these hosts on the vSAN datastore, you have the latitude to reconfigure at the DR site fairly easily.

BHagenSPI
Enthusiast

Thanks. But some of my questions are still valid, even if I create a new vCenter:

I'll (of course) have to create all new dvSwitches, port groups, and VMkernel ports, with IPs that work at the DR site, in the new vCenter instance.

Still wondering the best way to move the hosts to the new datacenter and cluster in vSphere: disconnect the hosts from the current cluster, change the IPs on the physical ESXi hosts, and then add them to the new cluster?
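I assume the per-host re-IP step would look something like this from the ESXi Shell or over SSH (the interface name, addresses, and gateway below are placeholders for whatever works at the DR site):

```shell
# Point the management VMkernel interface at a DR-site address
# (run on each host; vmk0 and all addresses are example values)
esxcli network ip interface ipv4 set \
    --interface-name=vmk0 \
    --ipv4=10.20.0.11 \
    --netmask=255.255.255.0 \
    --type=static

# Replace the default gateway with the DR-site router
esxcfg-route -a default 10.20.0.1

# Add a DR-site DNS resolver if the DR site uses different ones
esxcli network ip dns server add --server=10.20.0.2
```

Note that changing the management IP over SSH will drop the session, so doing this from the DCUI or console is probably safer.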

We have a ton of replicas on the DR stack (we did the initial seed replication while the DR stack was in the same rack, to take advantage of our 10 Gb connection), so there's no chance of re-creating the vSAN.

anvanster
Enthusiast

Hi,

Well, the first rule is to have a backup of everything you're running. Veeam Community Edition is a good solution for this.

Next, vSAN is quite a robust system and tends to recover well; moving to another cluster should not be a problem.

Steps to perform:

  • Deploy a new vCenter Server and create a vSphere cluster.
  • Enable vSAN on the cluster.
  • Install the vSAN license and associate it with the cluster.
  • Disconnect one of the ESXi hosts from your existing vSAN cluster.
  • Add the previously disconnected host to the new vSAN cluster on your new vCenter Server.
    • You will get a warning on the vSAN configuration page stating "Misconfiguration detected". This is normal, because the ESXi host cannot communicate with the other hosts in the cluster it was configured with.
  • Add the rest of the ESXi hosts.
  • After all the ESXi hosts are added back, the warning should disappear.
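As a rough sketch, these are the per-host CLI commands I'd use to verify vSAN state around the move (run in the ESXi Shell or over SSH; the repair-delay value is an example):

```shell
# Confirm this host's vSAN cluster membership before and after the move
esxcli vsan cluster get

# List the disks this host contributes to the vSAN datastore
esxcli vsan storage list

# Optionally lengthen the object repair delay beyond the 60-minute
# default so a slow move doesn't trigger a rebuild (value in minutes)
esxcfg-advcfg -s 120 /VSAN/ClomRepairDelay
```

If you do raise the repair delay, remember to set it back to 60 on every host once the move is complete.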

Try to complete these steps within an hour; I believe the default delay before vSAN kicks off a rebuild is 60 minutes. If you move all the hosts to the new cluster promptly, you should be up and running by end of day.

Always have your backups handy. Veeam + FreeNAS is a good low-cost solution.

BHagenSPI
Enthusiast

@anvanster, you're mostly correct, but I just keep finding more steps that I need to take to make this happen.

We're trying to move from this:

vcenterserver1
-datacenter1
--cluster1
---6x ESXi hosts with vSAN 6.6 datastore
--DR cluster2
---6x ESXi hosts with vSAN 6.6 datastore

To this:

vcenterserver1
-datacenter1
--cluster1
---6x ESXi hosts with vSAN 6.6 datastore

vcenterserver2
-DR datacenter1
--DR cluster1
---6x ESXi hosts with vSAN 6.6 datastore

I've found the following articles:

First, How to move ESXi/ESX host from one vCenter Server to another (1004775) says the following:

VMware recommends that you do not migrate ESX/ESXi hosts with distributed switches. See these articles to migrate the hosts to standard switches prior to migrating them to a new vCenter Server.

Next, I found the following: Moving a vSAN cluster from one vCenter Server to another (2151610)

  • Note: For vSAN 6.6 only: run this command to ignore member-list updates: esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
  • We have vSAN 6.6. Does anybody know where exactly I'm supposed to run this command?

Last, I converged my vCSA + external PSC to a vCSA with an embedded PSC, as is now recommended by VMware (which took a 4h15m support call). I then created a new vCSA, along with a new datacenter and cluster. But the vCSA still lives on a host in the current datacenter and cluster, and I'm still trying to figure out how to move it to the new datacenter.

BHagenSPI
Enthusiast

Next, I found the following: Moving a vSAN cluster from one vCenter Server to another (2151610)

  • Note: For vSAN 6.6 only: run this command to ignore member-list updates: esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates
  • We have vSAN 6.6. Does anybody know where exactly I'm supposed to run this command?

Well, I just found the answer to this question anyway!

...applied to all ESXi hosts PRIOR to disconnecting from the original vCenter Server and adding them into the new vCenter Server...

(Why couldn't I have found that article 2 weeks ago??) 🙂
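For anyone else doing this, something like the following sketch should apply the setting everywhere before disconnecting any host, and undo it afterward (host names are placeholders; it assumes SSH is enabled on the hosts):

```shell
# Apply the vSAN 6.6 member-list setting on every host *before*
# disconnecting them from the original vCenter Server
for host in esxi-dr-0{1..6}.example.com; do
  ssh root@"$host" 'esxcfg-advcfg -s 1 /VSAN/IgnoreClusterMemberListUpdates'
done

# After all hosts are joined to the new vCenter Server, set it back
for host in esxi-dr-0{1..6}.example.com; do
  ssh root@"$host" 'esxcfg-advcfg -s 0 /VSAN/IgnoreClusterMemberListUpdates'
done
```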

BHagenSPI
Enthusiast

I don't know if my specific circumstance is making me confused, but I can't seem to get the hang of this. Here's what I need to accomplish; I need to go from Scenario A to Scenario B:

(attached diagram: vcsa-move.png)

Scenario A:

A while ago, I created the production hosts, vSAN, networking, everything. All our VMs are on the hosts, and all is humming along nicely.

Then I bought 6 more hosts, built and configured them identically to the first 6, and created their own vSAN and vSAN-enabled cluster, physically at the same site and under the same vCenter Server. The plan was to move them to our DR site after building them and using Veeam to replicate all the VMs to the new stack; we might as well build them at the prod site and use a 10 Gb link between physical switches to seed all the replicas, rather than wait a year to do it over the 200 Mb L2 link to our DR site.

Now, all VMs are replicated. DR Hosts are physically moved to the DR site. Physical networking over the L2 tunnel is done and working. Veeam replication (incrementals) is working quite nicely.

But wait! All the physical and virtual networking still points to gateways at the prod site, and the Veeam server and vcenterserver01 are also in the prod stack. If the prod site goes down, we have a mess.

So...Scenario B:

I need to create a new DRvCenterServer, at the DR site, and move all the DR hosts from the DR Cluster on the left to the new DRvCenterServer on the right.

My first, basic question is: Where do I create the new vCenter Server??

     If I create it in the DR cluster (on the left), then I can't migrate it to B, because when I move the hosts, won't the vCSA go down, leaving me unable to get back into it?

     If I create it in the prod cluster (on the left), I won't be able to migrate it to the DR hosts after connecting them to the new datacenter.

*sigh*

I'm confused...thank you for your patience. 🙂

PS

I did read https://www.virtuallyghetto.com/2014/09/how-to-move-a-vsan-cluster-from-one-vcenter-server-to-anothe... and the post he refers to in "Re: VSAN: swapping out old vCenter (Server A) with new vCenter (Server B)", but these don't quite address what I'm trying to do.

daphnissov
Immortal

First, how many VMs are we talking about replicating to this secondary vSAN cluster, and how much storage does that represent (in GB or TB)?

Next

My first, basic question is: Where do I create the new vCenter Server??

     If I create it in the DR cluster (on the left), then I can't migrate it to B, because when I move the hosts, won't the vCSA go down, leaving me unable to get back into it?

     If I create it in the prod cluster (on the left), I won't be able to migrate it to the DR hosts after connecting them to the new datacenter.

Your problem here is that you want to lift-and-shift but have no existing infrastructure at your DR site. That being the case, I have two proposals:

  1. Bootstrap a temporary host at the DR site and deploy the vCSA plus any core infra (AD, DNS, etc.). Your networking will obviously need to exist and be configured, but you have to have that anyway.
  2. Break off one host from your 6-host secondary cluster and move it to the DR site. You would first reconfigure vSAN at the prod site to remove this one cluster member. Next, you'd proceed basically as in option #1. Assuming this host has local-only storage, a drive (or drives, if you temporarily use RAID to pool drive members) will need to be wiped first in order to lay down VMFS, as formatting will not succeed if existing vSAN partitions are present.
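For option #2, the wipe step on the broken-off host would look roughly like this from the ESXi Shell (the device names are placeholders for your actual disks, and these commands destroy data on them):

```shell
# Note which devices are the cache (SSD) and capacity disks
esxcli vsan storage list

# Take this host out of the vSAN cluster
esxcli vsan cluster leave

# Remove the disk group by its cache-tier SSD; removing the SSD
# drops the whole disk group (device name is an example placeholder)
esxcli vsan storage remove -s naa.5000xxxxxxxxxxxx

# Re-label a freed disk so VMFS can be laid down on it
partedUtil mklabel /vmfs/devices/disks/naa.5000xxxxxxxxxxxx gpt
```

From there, the easiest path is to create the VMFS datastore from the Host Client UI rather than hand-building the partition and running vmkfstools.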

After deciding on one of these two choices, you can migrate your vSAN cluster over to the DR site and point it at this new vCenter as described in the articles you provided. If you went with option #1, you can svMotion the vCSA from the standalone host to your relocated vSAN cluster, and all is well. If you went with option #2, you would do the same thing, and once that single host is vacant, you would reverse the process to bring it back into the cluster, at which point it would once again contribute its storage to the vSAN datastore.
