I am working on moving my hosts from booting off USB to internal storage (SSD). My process so far:
1. Backed up all my hosts with Get-VMHostFirmware to their .tgz configuration bundles (the exact commands are sketched below).
2. Put the host in maintenance mode and move it out of the cluster to isolate it.
3. Change the boot device, reload ESXi, patch it to the same build level, and then restore the config using Set-VMHostFirmware.
4. Boot, test, and reintroduce the host into the cluster.
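For reference, this is roughly what I ran for the backup and restore (host names, paths, and credentials here are just examples, not my real ones):

# Backup: produces one configBundle-<hostname>.tgz per host
Get-VMHostFirmware -VMHost esx01.lab.local -BackupConfiguration -DestinationPath C:\ESXiConfigBackups

# Restore: host must be in maintenance mode and patched to the same build the backup was taken on
Set-VMHostFirmware -VMHost esx01.lab.local -Restore -SourcePath C:\ESXiConfigBackups\configBundle-esx01.lab.local.tgz -HostUser root -HostPassword 'xxxxxx'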
I have done a bunch of hosts this way and they have gone fine, but on the one I have just done, after reloading the host and restoring the config, the host booted up showing 50+ VMs on it. Most show generic numbers with a status of "invalid", and a smaller number show proper VM names with a powered-off status.
The ones showing proper names on this host are also showing in the cluster as powered off.
The randomly numbered ones (invalid status) I suspect are VMs that are currently running in the active cluster.
I assume what might have happened is that the host re-registered every VM that was registered to it, or running in the active cluster, at the time the backup was taken, because that registration info was part of the backup?
This host now seems to have a bunch of stale VMs on it, and I'm nervous to "unregister" the named and "numbered" ones because I'm afraid that might affect the production VMs in the cluster. I'm not sure how it would affect them if you are unregistering them from an isolated host, and I'm not sure where to go from here with this host.
It's odd to me that this host kept all its "registered" VMs via the backup. The backups were taken across the whole cluster of hosts while the cluster was active, and I haven't seen this problem so far.
Typically I restore the host, it reboots, and it comes back up with no VMs but configured correctly. I put it back in the cluster, take it out of maintenance mode, and it's done.
Hello,
This does sound like a peculiar situation. However, based on your description, it seems like you're looking at residual registration metadata for VMs that were registered on the host before the backup was taken; the restore operation likely brought that VM inventory data back along with the host configuration.
The 'invalid' status usually means that ESXi cannot access the .vmx file for a VM, for example because the file no longer exists at the registered path or because another host currently holds a lock on it. It's worth noting that host backups via Get-VMHostFirmware and Set-VMHostFirmware don't include the actual VM files; they capture the host configuration, which includes the host's local VM inventory (the list of registered VMs), and that is why the old registrations reappeared after the restore.
Your assumption about these VMs possibly being active in the cluster elsewhere seems plausible. They might still be running on other hosts in the cluster, and what you're seeing is just a "ghost" of their presence on this host from before the backup was taken.
Here's how I would suggest proceeding:
Verify that these VMs are indeed running on other hosts in your cluster. If they are, it should be safe to unregister the stale entries from the restored host without affecting the production VMs: unregistering only edits this host's local inventory, and it does not touch the files on the shared datastores or the registrations held by the other hosts.
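As a rough sketch (the VM/host names and credentials below are placeholders, not anything from your environment), you could cross-check from your vCenter connection first and then unregister directly against the isolated host:

# From vCenter: confirm the VM is alive and registered on another host in the cluster
Get-VM -Name "ProdVM01" | Select-Object Name, PowerState, VMHost

# Connect straight to the isolated host (it is out of the cluster, so target it by name/IP)
Connect-VIServer -Server esx07.lab.local -User root -Password 'xxxxxx'

# Remove-VM WITHOUT -DeletePermanently only unregisters the VM from this host's inventory;
# nothing on the shared datastores is deleted or modified
Get-VM | Where-Object { $_.ExtensionData.Runtime.ConnectionState -eq 'invalid' } | Remove-VM -Confirm:$false

The same Remove-VM call, filtered by name instead of connection state, would clear the powered-off, properly named entries if you decide they don't belong on this host either.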
If the VMs showing proper names and a powered off status are supposed to be on this host, make sure their associated files (including .vmx) are present on the host's datastore. If they are, you can try to re-register these VMs manually.
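Re-registering an existing VM is just New-VM with -VMFilePath pointing at the .vmx; it registers the existing files rather than creating anything new. A minimal sketch, with made-up datastore, VM, and host names:

# Register an existing .vmx on the host; no files are created or copied
$vmhost = Get-VMHost -Name esx07.lab.local
New-VM -VMFilePath "[Datastore01] MyVM/MyVM.vmx" -VMHost $vmhost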
If you still see inconsistencies or issues, you might want to consider re-installing ESXi on the problematic host instead of restoring from backup. Then, reconfigure it manually or by using Host Profiles, if you have them set up. This could help ensure there's no leftover data from the backup causing these issues.
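If you do go the rebuild route and have a Host Profile available, the apply step would look roughly like this (the profile and host names are placeholders, and your workflow may also involve answering host-specific customization prompts):

# Rebuild path: apply an existing Host Profile instead of restoring the firmware backup
$vmhost  = Get-VMHost -Name esx07.lab.local
$profile = Get-VMHostProfile -Name "ClusterBaseline"

# Host should be in maintenance mode before applying the profile
Set-VMHost -VMHost $vmhost -State Maintenance
Apply-VMHostProfile -Entity $vmhost -Profile $profile -Confirm:$false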
Remember to always have current backups of your VMs and configurations. I hope this helps, and please keep us updated on your progress.