VMware Cloud Community
7007VM7007
Enthusiast

vMotion fails: different address families

I have a two-node vSphere 6.5 Update 1 cluster in my test environment. Yesterday I rebuilt the first host because I wanted to start using/testing Secure Boot. Enabling UEFI/Secure Boot went well, and after reinstalling I reconfigured the host: the IPv4/IPv6 addresses are the same as before, the DNS entries are unchanged, and the vmkernels are configured the same way.

The problem I have is that when I try to vMotion a VM from the second host (which hasn't changed) to the first host (the host that has been rebuilt with Secure Boot), I get the following error in vCenter:

[Screenshot: vCenter vMotion error about different address families]

My management IPs are the following for host one:

192.168.30.7

xxxx:470:6c28:30::7

and for host two:

192.168.30.8

xxxx:470:6c28:30::78

I am *not* using the management network for vMotion; I have configured a separate vMotion network with its own vmkernels, and the IP range for that network is 192.168.70.x.
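In case it helps, this is roughly how I've been double-checking the vMotion tagging and the address families on each host from an SSH session (vmk1 happens to be my vMotion vmkernel, so adjust the interface name to your setup):

# list the vmkernel interfaces and their netstacks
esxcli network ip interface list
# which services are tagged on the vMotion vmkernel (should show VMotion)
esxcli network ip interface tag get -i vmk1
# address families actually configured per vmkernel
esxcli network ip interface ipv4 get
esxcli network ip interface ipv6 address list

The tag output shows VMotion only on the dedicated vmkernel, and the ipv4/ipv6 output shows which families each vmkernel actually carries.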

Can anyone explain to me why I can't vMotion and why I am getting the error in the attached screenshot? I've gone over the config a dozen times but can't find anything wrong with it!

Thanks!

4 Replies
RvdNieuwendijk
Leadership

It looks like one host is using IPv4 and the other host is using IPv6 for the vMotion network. Maybe you should disable IPv4 or IPv6 and use only one IP stack.
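If you decide to drop one of the stacks, something along these lines should turn IPv6 off on a host entirely (a sketch only, untested on your exact build, and it needs a reboot to take effect):

# show whether the IPv6 stack is currently enabled
esxcli network ip get
# disable IPv6 host-wide; takes effect after a reboot
esxcli network ip set --ipv6-enabled=false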

Blog: https://rvdnieuwendijk.com/ | Twitter: @rvdnieuwendijk | Author of: https://www.packtpub.com/virtualization-and-cloud/learning-powercli-second-edition
7007VM7007
Enthusiast

But I only use IPv4 on the vMotion network.

The IP addresses shown in the screenshot are my management IPs.
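To rule out a basic connectivity issue on the vMotion network, I also tried something like this from the second host (vmk1 is my vMotion vmkernel, and 192.168.70.7 is only an example address for the rebuilt host, so substitute your own values):

# ping the other host's vMotion IP through the vMotion vmkernel
vmkping -I vmk1 192.168.70.7
# if the vmkernel sits on the dedicated vMotion TCP/IP stack, specify the netstack
vmkping ++netstack=vmotion -I vmk1 192.168.70.7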

rbihlmeyer
Expert

I had the same problem with vCenter/ESXi 6.7 U1: all servers in the cluster had dual-stack management vmkernel ports (v4 and v6 addresses), while the vMotion vmkernel ports were v4-only. Some servers insisted on using their v6 address while others wanted to talk via their v4 address. vMotion was possible within the "v4 partition" and within the "v6 partition", but not from one to the other.

What worked for me was disconnecting and re-connecting all the ESXi servers of the "v4 partition" in vCenter. That was possible with VMs still running on the servers in question, so no outage was experienced.

FWIW, what did not work: restarting the management services, and putting a host into maintenance mode and rebooting it.

--
Robert Bihlmeyer / ASSIST / Arrow ECS GmbH
coltex-support
Contributor

Here is what worked for me.

I removed IPv4 from the ESXi hosts; with just IPv6, all the hosts used the same address family on their vmkernel interfaces.
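Roughly what I did per host to drop IPv4 from a vmkernel (do this from the console/DCUI rather than over the network you are about to remove; vmk0 is just the example interface here):

# remove the IPv4 configuration from the vmkernel, leaving only IPv6
esxcli network ip interface ipv4 set --interface-name=vmk0 --type=none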

I also made sure that the vCenter appliance has working IPv6; somehow it keeps getting disabled on the vCenter 6.7 appliance. Using the console I enabled IPv6, restarted the management network, added the prefix, and restarted the management network again. That works.

Now you can upgrade the hosts using Update Manager. The upgrade ISO is local, so that works, but the other updates are not reachable over IPv6 only, so they show as not applicable.

After upgrading the entire cluster over IPv6, with DRS shuffling the VMs around, I added the IPv4 addresses back to each host, because otherwise I can't perform the other updates.
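Adding IPv4 back looks something like this (the address and netmask are just placeholders borrowed from earlier in the thread, use your own values):

# re-add a static IPv4 address to the vmkernel
esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=192.168.30.7 --netmask=255.255.255.0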
