I am hoping someone can help me understand why I am seeing what I am seeing and how to get things configured correctly.
My configuration is as follows:
2x ESXi Hosts with 7.0U3N installed (Dell R650xs (ESX1) & Dell R640 (ESX2))
DNS A and PTR (reverse) records have been created for the two hosts and the VCSA. All names resolve and can be pinged successfully (except the VCSA at this point, as it hasn't been deployed yet; after deployment its name resolves correctly).
I ran the VCSA installer (7.0U3N) and pointed it at ESX1. The installer kept failing when I used the FQDN of ESX1, but when I used IP it deployed fine. From what I understand the FQDN failure is not uncommon.
I then logged into the VCSA, created a datacenter, then a cluster, and was brought to the quickstart wizard to add hosts. I entered both hosts into the wizard, which seemed okay at first, but then it complained that there were running VMs on one of them (the VCSA) and that they needed to be shut down before a host could be imported into a cluster. Am I supposed to have a third server sitting on the side just to host the VCSA?
What I did instead was import only ESX2 into the cluster. I figured I would then shut down the VCSA, migrate it to ESX2, and add ESX1 to the cluster. That did not go well at all: my only migration option was to export to OVF, but I did that anyway, and when I went to import the VCSA OVF onto ESX2 it complained that the VM would only work on ESX1 (something to that effect). When I then tried to power on the VCSA on ESX1, it failed to power on because of its current state (powered off). It took a reboot of ESX1 before I could power up the VCSA again.
At this point I thought maybe I was overthinking things and could just import ESX1 with the VCSA running, since VMware would surely have accounted for this scenario. When I went to import ESX1 into the cluster it threw the usual warnings about running VMs, which I pushed through, and ESX1 appeared to be added to the cluster immediately (no maintenance mode or anything). However, the wizard is now stuck saying that all hosts are in maintenance mode (though neither of them is), and clicking Revalidate does not change anything.
What am I doing wrong?
Adding a host to a cluster requires the host to be in maintenance mode. So instead of adding the host to the cluster directly, add the host to the datacenter first, then drag and drop (or move) the host into the cluster.
As Sachchidanand has said, add the host to the datacenter, not the cluster. There is no need to export/import vCenter.
Put the host that is not running vCenter into maintenance mode, move it into the cluster, and take it out of maintenance mode.
vMotion vCenter to the host now in the cluster.
Once vCenter has moved off it, put the second host into maintenance mode, move it into the cluster, and take it out of maintenance mode.
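For anyone who prefers PowerCLI, the steps above can be sketched roughly as follows. This is only an illustration: all of the host, datacenter, cluster, and VM names below are placeholders for this environment, and you would adjust credentials and names to match your own setup.

```powershell
# Connect to the vCenter (placeholder FQDN)
Connect-VIServer vcsa.example.com

# Add the host that is NOT running the VCSA at the datacenter level, not the cluster
Add-VMHost -Name esx2.example.com -Location (Get-Datacenter DC01) -User root -Password '<password>'

# Put it into maintenance mode, move it into the cluster, then exit maintenance mode
Set-VMHost -VMHost esx2.example.com -State Maintenance
Move-VMHost -VMHost esx2.example.com -Destination (Get-Cluster Cluster01)
Set-VMHost -VMHost esx2.example.com -State Connected

# vMotion the VCSA onto the host that is now in the cluster
Move-VM -VM 'VMware vCenter Server' -Destination (Get-VMHost esx2.example.com)

# Repeat the maintenance-mode / move steps for the first host
Set-VMHost -VMHost esx1.example.com -State Maintenance
Move-VMHost -VMHost esx1.example.com -Destination (Get-Cluster Cluster01)
Set-VMHost -VMHost esx1.example.com -State Connected
```

The same sequence can of course be done entirely in the vSphere Client by dragging hosts between the datacenter and the cluster.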
Thank you for the tips. I tried to migrate the VCSA to the second ESXi host, but I am getting an error saying that I have to enable EVC on the cluster to migrate. I checked the KB article and the compatibility matrix site on VMware's website, and enabled EVC on the cluster at the latest CPU baseline that was allowed.
This EVC error will not go away, so I cannot migrate the VCSA. The details are as follows:
MCD_NO is not supported
RSBA_NO is not supported
IBRS_ALL is not supported
RDCL_NO is not supported
3DNow! PREFETCH and PREFETCHW are unsupported
The 3DNow error was not in the list before I enabled EVC.
Note that this error is presented even when no cluster is defined and neither host is in a cluster.
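Since those missing features (MCD_NO, RSBA_NO, IBRS_ALL, RDCL_NO) are CPU capability flags, it may help to compare what EVC baseline each host actually supports against what the cluster is set to. A quick read-only PowerCLI check (run against your vCenter; it makes no changes) might look like this:

```powershell
# Show each host's CPU model and the highest EVC mode it is capable of
Get-VMHost | Select-Object Name, ProcessorType, MaxEVCMode

# Show the EVC mode currently configured on the cluster (blank if EVC is disabled)
Get-Cluster | Select-Object Name, EVCMode
```

If the two hosts report different MaxEVCMode values (the R650xs and R640 are different CPU generations), the cluster's EVC baseline would need to be set no higher than the older host's mode for vMotion between them to be possible.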
Any additional help is greatly appreciated!