VMware Cloud Community
BenJonesDeluxe
Contributor

Consolidating multiple clusters, on various ESXi versions, with various storage, over various protocols!?

We have 4 vSphere clusters, each with their own storage.

1) 4-host 6.0 cluster with 11TB of all-flash Tintri storage over NFS via 10Gb SFP+ copper.

2) 3-host 4.1 cluster with 27TB of storage across 2 EqualLogic crates, via 1Gb copper and 10Gb SFP+ copper respectively.

3) 4-host 4.1 cluster with 8TB of storage across 2 Infortrend crates via Fibre Channel.

4) 4-host 4.1 cluster with 14TB of storage across 2 HP P2000s via Fibre Channel.

The plan is to add a 5th host to cluster 1, along with 50TB of storage, probably over NFS on 10Gb SFP+ copper, and migrate the VMs from the other 3 clusters onto those hosts and that storage.
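For what it's worth, mounting the new NFS export on every cluster 1 host can be scripted. Here's a minimal pyVmomi sketch; the vCenter address, credentials, cluster name and NFS server/path are all placeholders, not our real names:

```python
# Minimal pyVmomi sketch: mount one NFS export on every host in a cluster.
# All names below (vCenter, credentials, cluster, filer, paths) are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster1")
view.Destroy()

spec = vim.host.NasVolume.Specification(
    remoteHost="filer.example.local",   # the new 50TB crate (placeholder)
    remotePath="/export/vmfs01",
    localPath="nfs-50tb-01",            # datastore name as seen by ESXi
    accessMode="readWrite",
    type="NFS")

for host in cluster.host:
    host.configManager.datastoreSystem.CreateNasDatastore(spec)

Disconnect(si)
```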

As for migrating the VMs, I think the following is our easiest option, but I'm hoping to get some opinions/suggestions (a rough automation sketch follows the list):

- Ensure all VM port groups from clusters 2, 3 and 4 are trunked and replicated on cluster 1's hosts.

- Open firewall rules to allow Cluster 1's hosts to see and mount Cluster 2's storage.

          - Power off all VMs on cluster 2 and remove them from Cluster 2's inventory.

          - Add them to Cluster 1's inventory.

          - Storage vMotion them to new datastores provisioned on the additional 50TB of storage.

- Add a Fibre Channel card to the new Host 5 in cluster 1.

          - Zone that card into cluster 3's storage fabric and present the disks, so it can see the Infortrend storage.

          - Power off all VMs on cluster 3 and remove them from Cluster 3's inventory.

          - Add them to Cluster 1's inventory.

          - Storage vMotion them to the new datastores.

Repeat for cluster 4.
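To make the unregister/re-register/relocate loop concrete, here's a hedged pyVmomi sketch of the per-VM steps; it assumes you're already connected to vCenter and have looked up the destination folder, resource pool, host and datastore. None of these names come from our environment:

```python
# Hedged sketch of the per-VM steps above: power off, unregister from the
# old cluster, register into Cluster 1, then cold Storage vMotion the disks
# onto the new 50TB datastore. All object names are placeholders.
from pyVim.task import WaitForTask
from pyVmomi import vim

def migrate_vm(vm, dest_folder, dest_pool, dest_host, dest_datastore):
    vmx_path = vm.config.files.vmPathName   # e.g. "[old-ds] vm1/vm1.vmx"

    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        WaitForTask(vm.PowerOffVM_Task())   # hard power-off; shut guests down cleanly first

    vm.UnregisterVM()                       # drops it from the old inventory; files stay put

    # Re-register against Cluster 1; the VMX must sit on storage its hosts can see.
    task = dest_folder.RegisterVM_Task(path=vmx_path, asTemplate=False,
                                       pool=dest_pool, host=dest_host)
    WaitForTask(task)
    new_vm = task.info.result               # the freshly registered VirtualMachine

    # Cold Storage vMotion onto the new datastore.
    spec = vim.vm.RelocateSpec(datastore=dest_datastore)
    WaitForTask(new_vm.RelocateVM_Task(spec))
```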

We can afford the downtime, but is this the easiest way?

Thanks in advance!


Ben

daphnissov
Immortal

"We can afford the downtime, but is this the easiest way?"

This is probably your only way, short of temporarily upgrading those other clusters from 4.1.

What you propose is what I'd do, but here are some things to check and be cautious of beforehand:

  1. On the clusters that contain VMs to be migrated to Cluster 1, ensure you have full backups of every VM before proceeding.
  2. You may want to mount the old storage to only one host in Cluster 1 to facilitate the migrations.
  3. Further to #2, check that the "old" storage you're about to present will not freak out when attached to vastly newer hosts (a quick check sketch follows this list). This is somewhat of a more open-ended caveat. Some older block storage microcode behaves oddly (and, in some cases, dangerously) when shown newer ESXi hosts because of how VAAI and other metadata operations changed over the years. Don't put yourself in a position where you cause a storage outage because of these incompatibilities. You might want to raise an SR with the respective storage vendor (provided you have support, obviously) just to quickly run it by them and see if there are any internal PRs which flag this as a bad idea. It's being overly cautious, yes, but better safe than sorry.
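One quick way to sanity-check the VAAI angle is to ask each host what it thinks of the LUNs it can see. A hedged pyVmomi sketch, assuming "esx" is an already-connected HostSystem object (nothing here is environment-specific):

```python
# Hedged sketch for caveat #3: list each SCSI LUN's VAAI (vStorage)
# hardware-acceleration state before trusting old block storage on new hosts.
def report_vaai(esx):
    device_info = esx.configManager.storageSystem.storageDeviceInfo
    for lun in device_info.scsiLun:
        # vStorageSupport is "vStorageSupported", "vStorageUnsupported"
        # or "vStorageUnknown" per the vSphere API.
        vendor = (lun.vendor or "").strip()
        print(f"{lun.canonicalName:34} {vendor:10} {lun.vStorageSupport}")
```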
BenJonesDeluxe
Contributor

Thanks for the reply! Glad I'm moving in the right direction!

OK, how about this for a slight refinement...


All the clusters can see the 50TB storage and their own storage.

I install a Veeam Backup VM on each cluster to make use of FastSCP, and use that to transfer the powered-off VMs from the old datastores to new datastores on the new 50TB crate.

Then, the new cluster doesn't see any of the old storage hardware directly.
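If Veeam weren't in the picture, the raw vSphere equivalent of that FastSCP-style copy would be the FileManager API. A hedged sketch (the datastore paths are made up; for .vmdk files the disk-aware VirtualDiskManager.CopyVirtualDisk_Task is the safer call):

```python
# Hedged sketch of a raw datastore-to-datastore file copy, the API analogue
# of what FastSCP does for a powered-off VM. Paths are placeholders.
from pyVim.task import WaitForTask

def copy_datastore_file(si, datacenter, src, dst):
    fm = si.content.fileManager
    WaitForTask(fm.CopyDatastoreFile_Task(
        sourceName=src,                   # e.g. "[old-eql-ds] vm1/vm1.vmx"
        sourceDatacenter=datacenter,
        destinationName=dst,              # e.g. "[nfs-50tb-01] vm1/vm1.vmx"
        destinationDatacenter=datacenter,
        force=False))                     # don't overwrite existing files
```

After the copy you'd re-register the .vmx on cluster 1, same as in the earlier sketch.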

daphnissov
Immortal

If the common shared storage is over NFS, you're usually much safer than if it's block. The Veeam option could work as well, although even without shared storage you could still use the Quick Migration feature, assuming the management VMkernel interfaces are reachable.
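For that last point, a quick hedged helper to dump each host's VMkernel interfaces and IPs so you can verify the management networks actually route to each other ("hosts" is any iterable of connected HostSystem objects; nothing here is Veeam-specific):

```python
# Hedged helper: print every VMkernel interface and its IP per host, to
# confirm management-network reachability before a Quick Migration.
def print_vmkernels(hosts):
    for h in hosts:
        for vnic in h.config.network.vnic:
            print(f"{h.name:25} {vnic.device:6} {vnic.spec.ip.ipAddress}")
```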