We have 4 vSphere clusters, each with its own storage.
1) 4-host 6.0 cluster, with 11TB of all-flash Tintri storage over NFS, via 10Gb SFP+ copper.
2) 3-host 4.1 cluster, with 27TB of storage across 2 EqualLogic crates, via 1Gb copper and 10Gb SFP+ copper respectively.
3) 4-host 4.1 cluster, with 8TB of storage across 2 Infortrend crates, via Fibre Channel.
4) 4-host 4.1 cluster, with 14TB of storage across 2 HP P2000s, via Fibre Channel.
The plan is to add a 5th host to cluster 1, along with 50TB of storage (probably NFS over 10Gb SFP+ copper), and migrate the VMs from the other 3 clusters onto those hosts and that storage.
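As a quick sanity check on sizing, here is a back-of-the-envelope sketch using only the capacities quoted above. It assumes those figures are raw array capacity, not actual VM usage, so the real numbers will differ:

```python
# Back-of-the-envelope capacity check for the consolidation plan.
# Figures are the raw capacities quoted above; actual used space and
# thin vs. thick provisioning will change the real requirement.
old_clusters_tb = {
    "cluster2_equallogic": 27,
    "cluster3_infortrend": 8,
    "cluster4_hp_p2000": 14,
}
new_storage_tb = 50

total_to_migrate = sum(old_clusters_tb.values())  # 49 TB
headroom = new_storage_tb - total_to_migrate      # 1 TB

print(f"Total to migrate: {total_to_migrate} TB")
print(f"Headroom on new storage: {headroom} TB")
```

With only ~1TB of slack if every old datastore were full, it's worth measuring actual used space (and deciding on thin provisioning) before committing to the 50TB figure.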
As for migrating the VMs, I think this is our easiest option, but I'm hoping to get some opinions/suggestions:
- Ensure all VM port groups from clusters 2, 3, and 4 are trunked and replicated on cluster 1's hosts.
- Open firewall rules to allow cluster 1's hosts to see and mount cluster 2's storage.
- Power off all VMs on cluster 2 and remove them from cluster 2's inventory.
- Add them to cluster 1's inventory.
- Storage vMotion them to new datastores provisioned on the 50TB of additional storage.
- Add a Fibre Channel card to the new Host 5 in cluster 1.
- Zone that card into cluster 3's storage fabric and present the disks, so it can see the Infortrend storage.
- Power off all VMs on cluster 3 and remove them from cluster 3's inventory.
- Add them to cluster 1's inventory.
- Storage vMotion them to the new datastores.
Repeat for cluster 4.
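The per-cluster steps above can be sketched as data, which makes the ordering invariants (attach storage first, power off before unregistering, register before Storage vMotion) explicit. This is only an illustration of the runbook; cluster names and access methods come from the plan above, and nothing here talks to vCenter:

```python
# Hedged sketch of the migration runbook above, encoded as ordered steps.
# "ip" = storage reachable over the LAN (open firewall, mount on cluster 1),
# "fc" = zone the new FC card into the source fabric and present the LUNs.

def runbook(cluster, access):
    """Return the ordered migration steps for one source cluster."""
    if access == "ip":
        attach = [f"open firewall so cluster 1 hosts can mount {cluster}'s storage"]
    else:
        attach = [
            f"zone Host 5's FC card into {cluster}'s storage fabric",
            f"present {cluster}'s disks to Host 5",
        ]
    return attach + [
        f"power off all VMs on {cluster}",
        f"remove the VMs from {cluster}'s inventory",
        "add the VMs to cluster 1's inventory",
        "Storage vMotion the VMs onto the new 50TB datastores",
    ]

plan = {
    "cluster 2": runbook("cluster 2", "ip"),  # EqualLogic over copper
    "cluster 3": runbook("cluster 3", "fc"),  # Infortrend over Fibre Channel
    "cluster 4": runbook("cluster 4", "fc"),  # HP P2000s over Fibre Channel
}
for name, steps in plan.items():
    print(name, "->", len(steps), "steps")
```

Writing it down this way also makes it obvious that clusters 3 and 4 share the same FC-attach pattern, so "repeat for cluster 4" is literally the same procedure with a different fabric zone.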
We can afford the downtime, but is this the easiest way?
Thanks in advance!
Ben
This is probably your only way, short of temporarily upgrading those other clusters from 4.1.
What you propose is what I'd do, but there are a few things to check and be cautious of beforehand.
Thanks for the reply! Glad I'm moving in the right direction!
OK, how about this for a slight refinement:
All the clusters can see the 50TB storage as well as their own.
I install a Veeam Backup VM on each cluster to make use of FastSCP, and use that to transfer the powered-off VMs from the old datastores to new datastores on the new 50TB crate.
That way, the new cluster never sees any of the old storage hardware directly.
If the common shared storage is over NFS, you're usually on much safer ground than sharing block storage between 4.1 and 6.0 hosts. The Veeam option could work as well, although even without shared storage you could still use its Quick Migration feature, assuming the management vmkernel ports are reachable between clusters.