VMware Cloud Community
T_16
Enthusiast

Cross vCenter vMotion Utility, importing VMs into new cluster, and hardcore tech question!

I've set up two new clusters, each at a different geographical site. I've given the hosts in each cluster private provisioning and hot-vMotion IP addresses on dedicated vmkernels, in order to make use of a GRE tunnel between the core server switch stacks at the two datacenters. This is preferable as inter-DC traffic is MUCH faster and closer to line speed between the geo-locations. Using our regular MPLS network, the transfer rate can drop from 440 Mbps through the GRE tunnel to as low as 170 Mbps.
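For reference, a vmkernel layout like the one described can be sketched with esxcli. The interface names, portgroup names, and addresses below are hypothetical, not taken from the actual environment:

```shell
# Hypothetical sketch of dedicated vmkernels for hot (vMotion) and cold
# (Provisioning/NFC) migration traffic. vmk1/vmk2, the portgroup names,
# and the 192.168.x.x addresses are illustrative only.

# Provisioning vmkernel (cold migration / NFC traffic)
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Provisioning-PG
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.10.11 -N 255.255.255.0 -t static
esxcli network ip interface tag add -i vmk1 -t Provisioning

# vMotion vmkernel (hot migration traffic)
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-PG
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.20.11 -N 255.255.255.0 -t static
esxcli network ip interface tag add -i vmk2 -t VMotion
```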


All good so far, as we can cold migrate and hot migrate between clusters at the two sites, but it means our provisioning vmkernel has a 192.168.x.x address, as does the hot vMotion vmkernel.

This means the Cross vCenter vMotion Utility fling fails to import/clone some old VMs from our other setups into the new one, as when connected to the target vCenter it must surely be looking to copy cold traffic over the generally accessible management interface, which has a "normal" routable IP on our networks.

I am at a loss as to what we can do here. Can VMware Converter copy files directly into a vCenter the way an OVF import does? We can import OVFs into our new setup with no issue, but again, I suspect the fling wants to send provisioning traffic to the management IP address of the ESXi host it picks from the cluster.

I guess it sounds like we have no choice but to give all of our hosts a second IP on the network just for cold provisioning traffic, OR let all cold traffic go via management as the default. But this is super annoying, as it means that when doing a storage vMotion, hot traffic will be fast via our GRE tunnel and cold traffic will be super slow via the rest of our standard MPLS network. Don't get me wrong, the GRE tunnel still uses the SAME MPLS backbone, but its encapsulation seems to make traffic funnel through much faster.

Any thoughts/advice welcome. I had thought it was possible for two vCenters to speak with each other regarding traffic flow instead of brokering the connection directly to the host. I am confused about what happens with an OVA/OVF import then, as surely the traffic is brokered to a host to store into the datastore? OVA import works perfectly for us.

Sorry for the ramble, but I feel stuck.

EDIT: the reason for the separation above is that our vmk0 management traffic is on old 1 GbE copper, and inter-host traffic is all 10 GbE, so it makes sense that when we do a cold migration/storage migration between hosts in the same cluster, we make use of the rapid local 10 GbE speed.

EDIT 2: OK, so VMware Converter works for a powered-off VM, but the fling fails for the same VM with "Cannot connect to host"!

Does VMware Converter work in a different way to the fling?

2 Replies
daphnissov
Immortal

as when connected to the target vcenter it MUST surely be looking to copy cold traffic over to the generally accessible management interface that has a "normal" routable ip in our networks.

I suspect the fling wants to send provisioning traffic to the mgmt ip address of the esxi host it picks from the cluster.

Yes, that's how "cold" migrations work, because these aren't vMotions of any sort. This traffic is considered NBD (network block device) and is sent from the source host where the VM is registered to the destination host via the management vmkernel interface.
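A quick way to confirm which vmkernel a host will use for this traffic is to inspect the interface tags; if no interface carries the Provisioning tag, NFC/cold traffic falls back to management. The vmk names here are illustrative:

```shell
# Show which traffic types each vmkernel interface is tagged for.
esxcli network ip interface tag get -i vmk0   # typically Management
esxcli network ip interface tag get -i vmk1   # lists Provisioning if tagged for cold/NFC traffic
```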

More about this and other migrations in my book, The Ultimate vSphere Virtual Machine Migration Guide.

ZibiM
Enthusiast

vMotion requires connectivity over the vMotion vmkernel.

Storage vMotion requires connectivity over the NFC-enabled vmkernel (a dedicated Provisioning vmkernel, or the default management one).

If you'd like to do shared-nothing vMotion between the sites, you need L3 connectivity on those vmkernels.

You state that both your vMotion and provisioning vmkernels use 192.168.x.x addresses, which more often than not means L2 only.

Please check whether you can establish gateways on these subnets.

If yes, then you just need to configure gateways on the relevant vmkernel TCP/IP stacks and you are good to go.

One word of caution: make sure your distributed switches are configured with the right MTU for the cross-site traffic.

In my case I had to reduce it due to WAN limits.
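A common way to verify the end-to-end MTU before relying on it is vmkping with the don't-fragment bit set. The remote IP and vmk name are illustrative; 8972 is 9000 minus 28 bytes of IP/ICMP headers, and GRE encapsulation adds further overhead (typically 24 bytes), which is often why the MTU has to come down for tunneled cross-site traffic:

```shell
# Test the jumbo-frame path MTU across the tunnel with don't-fragment set.
# If this fails but a smaller -s value succeeds, something along the path
# (e.g. the GRE tunnel) has a lower effective MTU.
vmkping -I vmk2 -S vmotion -d -s 8972 192.168.20.12
```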
