doncanton
Contributor

Migrating a vCenter Appliance to a Cluster

Hello, I am trying to deploy a vCenter Appliance 6.7 into a cluster.  I already deployed the appliance to an ESXi host, and now I want to add that host to a cluster I set up in the vCenter.  However, I cannot place the host in maintenance mode to move it into the cluster, because that would require powering down the vCenter.  I tried migrating the vCenter VM to another host already in the cluster, but the migration is failing.  I thought of enabling Per-VM EVC on the vCenter Appliance, since that allowed me to migrate another VM into the cluster, but the Per-VM EVC option is missing from the VM's settings when I log on to its host.  Does anybody have any suggestions?

Any help will be greatly appreciated.

Regards,

Federico

8 Replies
daphnissov
Immortal

You don't need to put a host into MM to add it to a cluster. If you have an existing cluster which is already using EVC, then yes, you'd need to not have any running VMs on the host which you wish to add. Obviously, this would be a problem with a vCSA running on it, so you could instead deploy vCSA to a host inside that cluster so it comes up with the necessary masks applied.

doncanton
Contributor

Thank you.  I tried to move the VM into the cluster before setting up EVC on it, but it was not working.  I can deploy a vCenter VM into the cluster, but when I set up the cluster in the newly deployed vCenter, its host will not be in the cluster at the start, so I would be back at the beginning, I think.  It feels like a bit of a chicken-and-egg problem.

Federico

daphnissov
Immortal

You're going to have to post more specific information, error messages, screenshots, whatever.

doncanton
Contributor

This is what I get when I try to migrate the vCenter appliance:

The target host does not support the virtual machine's current hardware requirements.

Use a cluster with Enhanced vMotion Compatibility (EVC) enabled to create a uniform set of CPU features across the cluster, or use per-VM EVC for a consistent set of CPU features for a virtual machine and allow the virtual machine to be moved to a host capable of supporting that set of CPU features. See KB article 1003212 for cluster EVC information.

CPUID faulting is not supported.

com.vmware.vim.vmfeature.cpuid.ibpb

com.vmware.vim.vmfeature.cpuid.ibrs

RDTSCP is unsupported.

1 GB pages are not supported (PDPE1GB).

3DNow! PREFETCH and PREFETCHW are unsupported.

XSAVE YMM State is unsupported.

XSAVE SSE State is unsupported.

com.vmware.vim.vmfeature.cpuid.ssbd

com.vmware.vim.vmfeature.cpuid.fcmd

com.vmware.vim.vmfeature.cpuid.stibp

com.vmware.vim.vmfeature.cpuid.mdclear

Fast string operations (Enhanced REP MOVSB/STOSB) are unsupported.

Supervisor Mode Execution Protection (SMEP) is unsupported.

Instructions to read and write FS and GS base registers at any privilege level are unsupported.

RDRAND is unsupported.

Half-precision conversion instructions (F16C) are unsupported.

Advanced Vector Extensions (AVX) are unsupported.

XSAVE is unsupported.

AES instructions (AES-NI) are unsupported or disabled in the BIOS. See KB 1034926.

POPCNT is unsupported.

MOVBE is unsupported.

SSE4.2 is unsupported.

SSE4.1 is unsupported.

PCID is unsupported.

FMA3 is unsupported.

Carryless multiply (PCLMULQDQ) is unsupported or disabled in the BIOS. See KB 1034926. 

  hostname

  The vMotion interface is not configured (or is misconfigured) on the "Source" host 'hostname'. 

  hostname

  A general system error occurred:

The IP address family of source vMotion nic (IPv6) does not match the destination (IPv4). If you proceed with the operation, there is a high likelihood of operation failure. 

  hostname

daphnissov
Immortal

Can you show what you have, or describe it in detail? What you've posted suggests you have EVC enabled and that you have a host without vMotion enabled.

bansne
Enthusiast

Hi,

A few things to check:

1) Is storage shared between the source and destination clusters?

2) Is the network/VLAN accessible across the two clusters?

3) EVC will not work if you have different hardware in the two clusters. Did you enable EVC with an Intel baseline? What hardware is in the other cluster's host?

These are all prerequisites for live-migrating a VM.

4) What is the error you see when you say the migration is failing? Can you paste the full error?

Try tail -f /var/log/vpxd.log while you initiate the migration to see the cause of the failure.
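As a sketch of that last step: rather than watching the raw tail scroll by, you can filter the log for migration-related lines. (Note the log names are assumptions from a standard layout; vpxd.log lives on the vCenter Server itself, while an ESXi host only has the agent log vpxa.log. The sample lines below are a stand-in for real log output.)

```shell
# Hypothetical sample standing in for a few lines of /var/log/vpxa.log;
# on a live host you would pipe `tail -f /var/log/vpxa.log` instead.
sample='2019-11-05T23:59:25.688Z info vpxa[1] [VpxLRO] -- FINISH task-476
2019-11-05T23:59:25.688Z info vpxa[1] [VpxLRO] -- ERROR task-476 -- vim.fault.NotFound
2019-11-05T23:59:26.684Z info vpxa[1] [VpxLRO] -- BEGIN task-477'

# Keep only lines mentioning errors or migration/vMotion activity.
printf '%s\n' "$sample" | grep -i -E 'error|migrat|vmotion'
```

Here only the ERROR line survives the filter, which is usually the line that names the underlying fault.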

Regards

doncanton
Contributor

I have two hosts.  One, a ProLiant DL380 Gen9 with an Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz, is outside the cluster; this host contains the vCenter appliance.  The cluster has VMware EVC enabled and already contains the other host, a ProLiant DL380 Gen9 with an Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz.  Ideally, I would move the vCenter appliance VM to the host already in the cluster, but when I attempt to migrate I get a validation error with the message above.  I don't see how to enable EVC on the host outside the cluster, and I also don't see how to enable Per-VM EVC on the vCenter appliance.  If I could enable Per-VM EVC on the vCenter appliance and migrate it to the cluster, that would fix my problem, but I don't see how.

Thank you

Federico

doncanton
Contributor

There is no shared storage between the servers.  They do share the network; they are in the same subnet and can ping each other with no problem.  I enabled EVC in the cluster with the Intel Merom Generation mode.  I was able to migrate another VM to the cluster when I set its EVC to the same mode.  The hardware on both hosts is the same, except that one has a CPU E5-2640 while the other has an E5-2660.  The migration is not attempted; I cannot get past the validation stage.

pastedImage_0.png

I tried to run tail -f /var/log/vpxd.log, but that file is not on the server.  The server has a vpxa.log file, which gave this output while I tried the migration again:

[root@localhost:~] tail -f /var/log/vpxd.log
tail: can't open '/var/log/vpxd.log': No such file or directory
tail: no files
[root@localhost:~] ls /var/log/
Xorg.log                     hostd.log                    sdrsinjector.log             vmkdevmgr.log
auth.log                     hostdCgiServer.log           shell.log                    vmkernel.log
boot.gz                      hostprofiletrace.log         sockrelay.log                vmkeventd.log
clomd.log                    init.log                     storagerm.log                vmksummary.log
clusterAgent.log             iofilter-init.log            swapobjd.log                 vmkwarning.log
cmmdsTimeMachine.log         iofiltervpd.log              sysboot.log                  vmware
cmmdsTimeMachineDump.log     jumpstart-esxcli-stdout.log  syslog.log                   vobd.log
configRP.log                 jumpstart-native-stdout.log  tallylog                     vprobe.log
cryptoloader.log             jumpstart-stdout.log         upitd.log                    vpxa.log
ddecomd.log                  kickstart.log                usb.log                      vsandpd.log
dhclient.log                 lacp.log                     vdfsd-proxy.log              vsanmgmt.log
epd.log                      loadESX.log                  vdfsd-server.log             vsansystem.log
esxupdate.log                nfcd.log                     vitd.log                     vvold.log
fdm.log                      osfsd.log                    vlf
hbragent.log                 rabbitmqproxy.log            vlf-ts
hostd-probe.log              rhttpproxy.log               vmauthd.log
[root@localhost:~] ls /var/log/vpxa.log
/var/log/vpxa.log
[root@localhost:~] tail -f /var/log/vpxa.log
--> Args:
-->
--> Arg catalogChangeSpec:
--> (vim.vslm.CatalogChangeSpec) {
-->    datastore = 'vim.Datastore:ds:///vmfs/volumes/5dadf7f0-6ec46408-9161-f40343591578/',
-->    startVClockTime = (vim.vslm.VClockInfo) {
-->       vClockTime = 1
-->    },
-->    fullSync = false
--> }
2019-11-05T23:58:49.417Z info vpxa[2131508] [Originator@6876 sub=vpxaInvtHost opID=SWI-30e437e3] Increment master gen. no to (175): ResourcePool:VpxaInvtHostResPoolListener::ConfigChanged
2019-11-05T23:58:55.687Z info vpxa[2131147] [Originator@6876 sub=vpxLro opID=HB-host-69@175-43cb9874-28] [VpxLRO] -- BEGIN lro-830 -- vpxa -- vpxapi.VpxaService.getChanges -- 527d63c0-b43e-16da-3994-714349e73421
2019-11-05T23:58:55.687Z info vpxa[2131147] [Originator@6876 sub=vpxLro opID=HB-host-69@175-43cb9874-28] [VpxLRO] -- FINISH lro-830
2019-11-05T23:59:05.491Z info vpxa[2131194] [Originator@6876 sub=vpxLro opID=PollQuickStatsLoop-1f9ef85f-6c] [VpxLRO] -- BEGIN lro-831 -- vpxa -- vpxapi.VpxaService.fetchQuickStats -- 527d63c0-b43e-16da-3994-714349e73421
2019-11-05T23:59:05.492Z info vpxa[2131194] [Originator@6876 sub=vpxLro opID=PollQuickStatsLoop-1f9ef85f-6c] [VpxLRO] -- FINISH lro-831
2019-11-05T23:59:25.677Z info vpxa[2131137] [Originator@6876 sub=vpxLro opID=sps-Main-653660-360-bc-e3] [VpxLRO] -- BEGIN task-476 -- catalogSyncManager -- vim.vslm.host.CatalogSyncManager.queryCatalogChange -- 527d63c0-b43e-16da-3994-714349e73421
2019-11-05T23:59:25.688Z info vpxa[2131146] [Originator@6876 sub=vpxLro opID=sps-Main-653660-360-bc-e3] [VpxLRO] -- FINISH task-476
2019-11-05T23:59:25.688Z info vpxa[2131146] [Originator@6876 sub=Default opID=sps-Main-653660-360-bc-e3] [VpxLRO] -- ERROR task-476 -- catalogSyncManager -- vim.vslm.host.CatalogSyncManager.queryCatalogChange: vim.fault.NotFound:
--> Result:
--> (vim.fault.NotFound) {
-->    faultCause = (vmodl.MethodFault) null,
-->    faultMessage = <unset>
-->    msg = "The object or item referred to could not be found."
--> }
--> Args:
-->
--> Arg catalogChangeSpec:
--> (vim.vslm.CatalogChangeSpec) {
-->    datastore = 'vim.Datastore:ds:///vmfs/volumes/5da8f3ae-a87d5cc8-9d22-f40343591578/',
-->    startVClockTime = (vim.vslm.VClockInfo) {
-->       vClockTime = 1
-->    },
-->    fullSync = false
--> }
2019-11-05T23:59:26.684Z info vpxa[2131194] [Originator@6876 sub=vpxLro opID=sps-Main-653660-360-25-d4] [VpxLRO] -- BEGIN task-477 -- catalogSyncManager -- vim.vslm.host.CatalogSyncManager.queryCatalogChange -- 527d63c0-b43e-16da-3994-714349e73421
2019-11-05T23:59:26.694Z info vpxa[2131150] [Originator@6876 sub=vpxLro opID=sps-Main-653660-360-25-d4] [VpxLRO] -- FINISH task-477
2019-11-05T23:59:26.694Z info vpxa[2131150] [Originator@6876 sub=Default opID=sps-Main-653660-360-25-d4] [VpxLRO] -- ERROR task-477 -- catalogSyncManager -- vim.vslm.host.CatalogSyncManager.queryCatalogChange: vim.fault.NotFound:
--> Result:
--> (vim.fault.NotFound) {
-->    faultCause = (vmodl.MethodFault) null,
-->    faultMessage = <unset>
-->    msg = "The object or item referred to could not be found."
--> }
--> Args:
-->
--> Arg catalogChangeSpec:
--> (vim.vslm.CatalogChangeSpec) {
-->    datastore = 'vim.Datastore:ds:///vmfs/volumes/5dadf7f0-6ec46408-9161-f40343591578/',
-->    startVClockTime = (vim.vslm.VClockInfo) {
-->       vClockTime = 1
-->    },
-->    fullSync = false
--> }
2019-11-06T00:00:05.019Z info vpxa[2131376] [Originator@6876 sub=vpxLro opID=440bff5f] [VpxLRO] -- BEGIN lro-834 -- vpxa -- vpxapi.VpxaService.querySummaryStatistics -- 527d63c0-b43e-16da-3994-714349e73421
2019-11-06T00:00:05.055Z info vpxa[2131376] [Originator@6876 sub=vpxLro opID=440bff5f] [VpxLRO] -- FINISH lro-834
2019-11-06T00:00:05.491Z info vpxa[2131144] [Originator@6876 sub=vpxLro opID=PollQuickStatsLoop-1f9ef85f-df] [VpxLRO] -- BEGIN lro-835 -- vpxa -- vpxapi.VpxaService.fetchQuickStats -- 527d63c0-b43e-16da-3994-714349e73421
2019-11-06T00:00:05.491Z info vpxa[2131144] [Originator@6876 sub=vpxLro opID=PollQuickStatsLoop-1f9ef85f-df] [VpxLRO] -- FINISH lro-835
