VMware Cloud Community
andvm
Hot Shot

Migrations between different DCs

Hi,

If we have 2 datacenters with clusters in different states/regions, where the only thing they have in common is that both are managed by the same VCSA (no shared storage, and each on different networks for all services such as vMotion, vSAN, VM networks, etc.):

What would be the FULL requirements to allow a VM to be compute- and storage-migrated between these DCs, and do the requirements change in the case of storage vMotion vs. cold migration?

  • Layer 2 connectivity between DCs?
  • Layer 3 connectivity (vMotion on a separate TCP/IP stack)?
  • vMotion network in the same network/VLAN on both DCs?
  • Same VM networks/DVS portgroups on both source/destination clusters/hosts?

       ....

I am more interested in the connectivity requirements and how one would make such a migration technically possible. Yes, there will be other requirements such as EVC, licensing, etc., but the focus is on connectivity. I know that vSphere supports long-distance vMotion but could not find much detail about the underlying requirements (could be because this lies beyond vSphere).

Thanks

5 Replies
scott28tt
VMware Employee

Start here: Migrate a Virtual Machine to a New Compute Resource and Storage in the vSphere Client


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee, I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
daphnissov
Immortal

I think my book should answer most of these questions for you.

andvm
Hot Shot

Taking a look... it seems this is the scenario:

https://chipzoller.gitbook.io/vspheremigration/scenario-2-inter-cluster-vm-migrations/inter-c-04

These are the network requirements:

"Network connectivity – Source and destination hosts must be able to communicate between their vmkernel ports tagged for vMotion. The link speed should be 1 Gbps or better. If vMotion vmkernel network connectivity is not possible, then management vmkernels must communicate (cold migration only)."

Will dig further to see if there is any information about possible options/examples for achieving the above vmkernel connectivity when clusters are in different datacenters, but as I said this is likely out of scope for vSphere with regards to how you do it. Still, I am interested in the different ways it can be achieved, for example:

Stretched L2 between datacenters, or L2 <-> L3 <-> L2, or VXLAN (not particularly familiar with the latter, but from what I have seen it might be one way of achieving such connectivity).

Thanks

daphnissov
Immortal

As you point out, this is outside the scope of vSphere's domain but rather a general network design concern. Whatever you end up doing to achieve that connectivity is up to you, but what matters is that, ultimately, those two vmkernel ports tagged for vMotion can intercommunicate.

andvm
Hot Shot

Ok, so the key point is that the vmkernels (source/destination) need to be able to reach each other, and which vmkernel is used varies by migration type:

  • vMotion – vMotion vmkernels
  • Cold migration – Management vmkernels (unless a Provisioning vmkernel is set)
  • etc. for FT, Replication, and any additional services
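The mapping above can be sketched as a small lookup. This is a minimal illustration of the thread's logic, not any vSphere API; the type and service names are made up for the example.

```python
def required_vmkernel(migration_type: str, has_provisioning_vmk: bool = False) -> str:
    """Return which vmkernel service must be reachable between source and
    destination hosts for a given migration type (per the list above).

    Cold migration falls back to the Management vmkernel unless a
    Provisioning vmkernel is configured.
    """
    if migration_type == "vmotion":
        return "vMotion"
    if migration_type == "cold":
        return "Provisioning" if has_provisioning_vmk else "Management"
    if migration_type == "replication":
        return "vSphere Replication"
    if migration_type == "ft":
        return "Fault Tolerance"
    raise ValueError(f"unknown migration type: {migration_type}")
```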

What about a compute + storage vMotion from a vSAN datastore in one cluster to a vSAN datastore in a different cluster: which vmkernels are used in this case? (The vSAN vmkernels should only be able to talk to the hosts in the same cluster, which together form the same vSAN datastore.)

Thanks
