I am working on replacing hosts that boot from USB with internal storage (SSD). My process:
1. Backed up all my hosts with the Get-VMHostFirmware command to the proper .tgz bundle files.
2. Put the host in maintenance mode and move it out of the cluster to isolate it.
3. Change the boot device, reload ESXi, patch to the same build level, and then restore the config using the Set-VMHostFirmware command.
4. Boot, test, and reintroduce it into the cluster.
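For reference, the backup and restore steps above roughly correspond to this PowerCLI. This is only a sketch: the server names, paths, and credentials are placeholders, and the restore step requires the host to be in maintenance mode:

```powershell
# Connect to vCenter (placeholder server name)
Connect-VIServer -Server vcenter.example.local

# 1. Back up the host config to a .tgz bundle
Get-VMHostFirmware -VMHost esxi01.example.local `
    -BackupConfiguration -DestinationPath C:\esxi-backups

# 2. Enter maintenance mode before the rebuild
Set-VMHost -VMHost (Get-VMHost esxi01.example.local) -State Maintenance

# 3. After reinstalling on SSD and patching to the same build,
#    restore the saved bundle (host must be in maintenance mode)
Set-VMHostFirmware -VMHost esxi01.example.local -Restore `
    -SourcePath C:\esxi-backups\configBundle-esxi01.tgz `
    -HostUser root -HostPassword 'xxxx'
```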
I have done a bunch of hosts and they have gone fine... but I have one that I have just done where, after reloading the host and restoring the config, the host booted up showing that it had 50+ VMs on it. Most show generic numbers with a status of "Invalid", and a smaller number of VMs on the host show proper VM names and a powered-off status.
The ones that show the proper name on the host are also showing in the cluster as powered off.
The randomly numbered (Invalid status) ones on the host, I suspect, are VMs that are currently running in the active cluster.
I assume what might have happened is that the host re-registered all the VMs that were either currently running in the active cluster or active on this host prior to the backup being taken, because that inventory was part of the backup?
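One way to test that assumption might be to compare the isolated host's local inventory against what vCenter currently owns. A sketch, assuming the host is reachable directly and the server names/credentials are placeholders:

```powershell
# Connect directly to the isolated host, bypassing vCenter
Connect-VIServer -Server esxi01.example.local -User root -Password 'xxxx'
$stale = Get-VM -Server esxi01.example.local

# Connect to vCenter and list what the live cluster actually owns
Connect-VIServer -Server vcenter.example.local
$live = Get-VM -Server vcenter.example.local

# Any name present in both inventories is a VM that is registered in
# the live cluster AND duplicated on the isolated host
Compare-Object $stale.Name $live.Name -IncludeEqual -ExcludeDifferent
```

The Invalid/numbered entries may not resolve to real names this way, but any named overlaps should stand out.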
This host now seems to have a bunch of stale VMs on it, and I guess I'm nervous to "unregister" the named and "numbered" ones, as I'm afraid that might affect the production VMs in the cluster. I'm not sure how it would affect them if you are unregistering them from an isolated host, and I'm not sure where to go from here with this host.
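My understanding of the mechanics (hedged, not authoritative): unregistering a VM only removes the inventory entry on the host you run it against; the .vmx/.vmdk files on the datastore are not touched, so the copies vCenter is managing should be unaffected. If that holds, a cleanup sketch would be to connect directly to the isolated host (placeholder name/credentials) and drop its local entries:

```powershell
# Connect directly to the isolated host, NOT to vCenter
Connect-VIServer -Server esxi01.example.local -User root -Password 'xxxx'

# Remove-VM WITHOUT -DeletePermanently only unregisters the VM from
# this host's inventory; the files on the datastore are left in place
Get-VM -Server esxi01.example.local |
    Remove-VM -Confirm:$false
```

The critical detail is the absence of -DeletePermanently, which is the flag that would actually delete files from disk.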
It's odd to me that this host kept all its "registered" VMs via the backup. The backups were taken across the whole cluster of hosts while the cluster was active, and I haven't seen this problem so far.
Typically I restore the host... it reboots... and comes back up with no VMs but configured correctly. I throw it back into the cluster, take it back out of maintenance mode, and it's done.