We had an ESX host suffer a hard failure, and its RAID controller and motherboard had to be replaced. Failover kicked in and the majority of the VMs were moved over via vMotion, except for two. Since the server was down, there was no way to do anything gracefully, so the Systems Administrator removed it from the cluster while the host was offline.

On top of this, the server room experienced a complete power failure. The on-site crew turned everything back on, and somehow the failed host came up and ran with no issues, other than no longer being a member of the cluster. Despite this, the server was shut down again, and Dell came out and replaced the motherboard and RAID controller per their recommendations from the previously reported failures.

When the host was powered back on, all the VMs registered on it were in an invalid state. I am unfamiliar with the process of re-introducing a host into a DC/Cluster when the VMs (which are all currently running without issue on the other host) are reporting as invalid on the newly repaired host.
Do we just add the host back to the DC/Cluster with the VMs still in an invalid state? Or do we first remove the invalid VMs from inventory on the newly repaired host and then add it back to the DC/Cluster?
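If the second option is the way to go, here is a rough sketch of how I would script the inventory cleanup against the repaired host with pyVmomi, purely illustrative: the host address and credentials are placeholders, and I'm assuming the stale VMs show a connectionState of 'invalid' or 'orphaned'.

```python
# Sketch only: unregister invalid/orphaned VMs directly on the repaired host.
# Assumes pyVmomi is installed; host address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # skip cert verification (lab use only)
si = SmartConnect(host='repaired-esx.example.com',  # placeholder hostname
                  user='root', pwd='***', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # Invalid/orphaned VMs cannot be powered on here, so unregistering
        # them only drops the stale inventory entry; the live copies running
        # on the other host are untouched.
        if vm.runtime.connectionState in ('invalid', 'orphaned'):
            print('Removing from inventory: %s' % vm.name)
            vm.UnregisterVM()
    view.Destroy()
finally:
    Disconnect(si)
```

My understanding is that unregistering only removes the inventory entries on that host and deletes nothing from the datastores, which is why cleaning these up first seems safer than rejoining the cluster with the invalid entries still present. Please correct me if that assumption is wrong.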
Any guidance is greatly appreciated!