Hi,
If we have two datacenters with clusters in different states/regions, where the only thing they have in common is that both are managed by the same VCSA (no shared storage, and each using different networks for all services such as vMotion, vSAN, VM networks, etc.),
what would be the FULL requirements to allow a VM to be compute and storage migrated between these DCs, and do the requirements change for storage vMotion vs. cold migration?
....
I am more interested in the connectivity requirements and how one would make such a migration technically possible. Yes, there will be other requirements such as EVC, licensing, etc., but the focus is on connectivity. I know that vSphere supports long-distance vMotion, but I could not find much detail about the underlying requirements (possibly because this lies beyond vSphere).
Thanks
Start here: Migrate a Virtual Machine to a New Compute Resource and Storage in the vSphere Client
I think my book should answer most of these questions for you.
Taking a look... it seems this is the scenario:
https://chipzoller.gitbook.io/vspheremigration/scenario-2-inter-cluster-vm-migrations/inter-c-04
These are the network requirements:
"Network connectivity – Source and destination hosts must be able to communicate between their vmkernel ports tagged for vMotion. The link speed should be 1 Gbps or better. If vMotion vmkernel network connectivity is not possible, then management vmkernels must communicate (cold migration only)."
Will dig further to see if there is any information about possible options/examples for achieving the above vmkernel connectivity when the clusters are in different datacenters, though as I said, how you do it is likely out of scope for vSphere. Still, I am interested in the different ways it can be achieved, for example:
Stretched L2 between the datacenters, routed L3 between the two L2 segments (L2 <-> L3 <-> L2), or VXLAN (not particularly familiar with the latter, but from what I have seen it might be one way of achieving such connectivity).
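Not from the book, just a minimal sketch of the decision those options imply, using Python's `ipaddress` module and hypothetical vmkernel addresses: if the source and destination vMotion vmkernel ports sit in the same subnet, a stretched L2 segment is enough; if they are in different subnets, the traffic must be routed at L3 (which on ESXi generally means giving the vMotion TCP/IP stack its own gateway):

```python
import ipaddress

def vmotion_path(src_ip: str, dst_ip: str, src_prefix: int, dst_prefix: int) -> str:
    """Classify connectivity between two vMotion vmkernel ports.

    Same subnet -> stretched L2 can carry the traffic directly.
    Different subnets -> the path must be routed at L3, so each
    host's vMotion stack needs a reachable gateway.
    """
    src_net = ipaddress.ip_interface(f"{src_ip}/{src_prefix}").network
    dst_net = ipaddress.ip_interface(f"{dst_ip}/{dst_prefix}").network
    return "L2 (same subnet)" if src_net == dst_net else "routed L3"

# Hypothetical vmkernel IPs for illustration only:
print(vmotion_path("10.10.50.11", "10.10.50.21", 24, 24))  # L2 (same subnet)
print(vmotion_path("10.10.50.11", "10.20.50.21", 24, 24))  # routed L3
```

Either way, the end state is the same: the two vMotion vmkernels can reach each other, whether via a stretched segment, plain routing, or an overlay such as VXLAN.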
Thanks
As you point out, this is outside vSphere's domain and is rather a general network design concern. Whatever you do to achieve that connectivity is up to you; what matters is that, ultimately, those two vmkernel ports tagged for vMotion can intercommunicate.
OK, so the key point is that the vmkernel ports (source and destination) need to be able to reach each other, and which vmkernel is used varies by migration type:
vMotion - vMotion vmkernels
Cold migration - Management vmkernels (unless a Provisioning vmkernel is set)
etc. for FT, Replication, and any additional services
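The mapping above can be sketched as a simple lookup table (labels are illustrative, not official service names; the one assumption encoded here, from the list above, is that a tagged Provisioning vmkernel takes over from Management for cold migration):

```python
# Which vmkernel service carries the traffic for each migration type.
# Illustrative labels only; not an official vSphere API.
VMKERNEL_FOR_MIGRATION = {
    "vmotion": "vMotion",
    "cold_migration": "Management",   # overridden by Provisioning if tagged
    "fault_tolerance": "Fault Tolerance logging",
    "replication": "vSphere Replication",
}

def vmkernel_for(migration_type: str, has_provisioning_vmk: bool = False) -> str:
    """Return which vmkernel service handles a given migration type."""
    if migration_type == "cold_migration" and has_provisioning_vmk:
        return "Provisioning"
    return VMKERNEL_FOR_MIGRATION[migration_type]
```

So when planning inter-datacenter connectivity, the vmkernels you need to make mutually reachable depend on which of these migration types you intend to support.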
What about a compute + storage vMotion from a vSAN datastore in one cluster to a different vSAN datastore in another cluster? Which vmkernels are used in that case, given that the vSAN vmkernels should only be able to talk to the hosts in the same cluster that form the same vSAN datastore?
Thanks