I updated a VM's Hardware to match host first. After that completed, I went to update the VMware Tools to match host but now I get a red bar with a message stating
An unexpected error has occurred.
Every time I go to the Updates tab for this VM, I get this message. I also get it when I go to the host and click the Updates tab > VMware Tools. If I move the VM off this host and then go back to the host's Updates tab, it works fine and gives me an overview of which VMs need VMware Tools updates.
I was able to update VMware Tools on this VM by clicking the "update now" link on the VM's Summary tab, and I confirmed the newer version on the server itself. The error still remains for this VM, though. I have restarted the Update Manager service on the VMware appliance, but sadly no luck.
I have attached some screenshots of this error. Any ideas on a solution?
Restarting the VCSA did not fix our issue either. Two technicians from VMware have looked at it so far, spending around 3 hours in WebEx-based troubleshooting. Several log bundles have been sent to VMware and I'm waiting to hear back from them.
I was facing the same issue with my vCenter 6.7 Update 3 and found that it was caused by a zombie VM: it was registered in vCenter but had no attached VMDK, and that was causing all the trouble.
Same issue here in vCenter 6.7U2. As mentioned correctly above, it seems to be a bug in the VUM HTML5 GUI and how it handles VMs without virtual disks connected. In my case, SRM "placeholder" VMs cause the problem (they are essentially VMX files without VMDKs attached).

I tested this in our lab very easily. When SRM placeholder VMs are present in the vCenter inventory, Update Manager displays the error "An unexpected error has occurred" under "VMware Tools". As soon as I delete the SRM placeholder VMs (i.e. delete all SRM Protection Groups), I can navigate back to Update Manager in the HTML5 GUI; the "VMware Tools" display works again and I can use "Check Status" to see VMware Tools status. As soon as I protect a VM with SRM and a placeholder VM is created, however, the VUM GUI breaks again once I click "Check Status" and it scans the SRM placeholder VMs.
So, long story short: this appears to be a bug in the HTML5 web UI that needs to be fixed.
The work-around for me was to use the legacy Flash/Flex UI for Update Manager, which seems to correctly classify the SRM placeholder VMs as "Unknown" without breaking any other UI workflows.
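If you want to spot such disk-less VMs before they break the VUM view, here is a minimal offline sketch. It assumes a device listing in the style of `govc device.ls` (from VMware's govmomi CLI), where virtual disks appear as lines starting with `disk-`; the sample listing below is made up for illustration.

```shell
#!/bin/sh
# Flag VMs with no virtual disks attached (SRM placeholders / zombie VMs),
# which per this thread is what trips up the VUM HTML5 "VMware Tools" view.

count_disks() {
    # Count virtual-disk devices in a device listing read from stdin.
    # Assumes disks are listed as "disk-NNNN-N ..." (govc-style output).
    grep -c '^disk-' || true
}

# Fabricated listing for an SRM placeholder VM: controllers and a NIC,
# but no VMDKs attached.
placeholder_listing='ide-200        VirtualIDEController
pci-100        VirtualPCIController
ethernet-0     VirtualVmxnet3'

disks=$(printf '%s\n' "$placeholder_listing" | count_disks)
if [ "$disks" -eq 0 ]; then
    echo "possible placeholder/zombie VM: no virtual disks"
fi
```

Against a live vCenter you would feed `count_disks` from `govc device.ls -vm <name>` for each VM instead of the canned listing.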
Just fixed this issue. We had to re-register the Update Manager extension and change ownership of the vci-integrity.xml file, which loses its permissions and defaults to root:root.
1. Take a snapshot
2. SSH to the VCSA.
3. Run the following to re-register the extension:
/usr/lib/vmware-updatemgr/bin/vmware-vciInstallUtils -C /usr/lib/vmware-updatemgr/bin/ -L /var/log/vmware/vmware-updatemgr/ -I /usr/lib/vmware-updatemgr/bin/ -v YOURVCSAFQDN -p 80 -U firstname.lastname@example.org -P 'REALPASSWORDHERE' -S /usr/lib/vmware-updatemgr/bin/extension.xml -O extupdate
4. After re-registration, vci-integrity.xml is owned by root only. Give ownership back to the updatemgr user:
chown updatemgr:updatemgr vci-integrity.xml
5. Restart the update manager service.
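Steps 4 and 5 can be sketched as a small script. Hedged assumptions: the vci-integrity.xml path and the `vmware-updatemgr` service name are as found on our VCSA 6.7 (restarted via the appliance's `service-control` utility); verify both on your appliance before running.

```shell
#!/bin/sh
# Sketch of steps 4-5, assuming step 3's re-registration already ran.
# Path below is an assumption based on a VCSA 6.7 layout.
VCI_XML=/usr/lib/vmware-updatemgr/bin/vci-integrity.xml

owner_of() {
    # Print "user:group" for a file (GNU coreutils stat).
    stat -c '%U:%G' "$1"
}

# Step 4: re-registration leaves vci-integrity.xml owned by root:root;
# hand it back to the updatemgr service account.
if [ -f "$VCI_XML" ] && [ "$(owner_of "$VCI_XML")" != "updatemgr:updatemgr" ]; then
    chown updatemgr:updatemgr "$VCI_XML"
fi

# Step 5: bounce the Update Manager service with the VCSA's
# service-control utility (skipped silently off-appliance).
if command -v service-control >/dev/null 2>&1; then
    service-control --stop vmware-updatemgr
    service-control --start vmware-updatemgr
fi
```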
Neither we nor VMware support had any luck with the fixes above. Through some testing with the help of vMotion, I figured out that the error was being caused by a single virtual machine. VMware claims the issue was that the following lines were missing from its vmx file:
tools.upgrade.policy = "manual"
toolScripts.afterPowerOn = "TRUE"
toolScripts.afterResume = "TRUE"
toolScripts.beforeSuspend = "TRUE"
toolScripts.beforePowerOff = "TRUE"
tools.guest.desktop.autolock = "FALSE"
The only "tool" related line in our vmx file was:
tools.syncTime = "FALSE"
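A quick way to audit which tools-related keys a given vmx file actually carries is a simple grep. The sample file below is fabricated to mirror our broken VM, which carried only the tools.syncTime key.

```shell
#!/bin/sh
# Audit the tools.* / toolScripts.* keys in a .vmx file.
# Sample vmx (hypothetical, modeled on the broken VM in this thread):
cat > /tmp/sample.vmx <<'EOF'
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "15"
displayName = "broken-vm"
tools.syncTime = "FALSE"
EOF

# List every tools-related configuration key present in the file.
grep -Ei '^(tools|toolScripts)\.' /tmp/sample.vmx
# prints: tools.syncTime = "FALSE"
```

Running the same grep against a VM that has the full set of keys from the list above would print all six lines, so a short output like this is a quick tell.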
After recreating the virtual machine, the issue went away. However, the configuration lines above are still missing and are no longer causing a problem. VMware was eager to close the ticket, but I think the underlying bug remains.
We have three ESXi hosts, and while the issue appeared at the cluster level, it was only present on one of the three hosts. By vMotioning VMs I discovered that the issue moved between hosts along with the VMs. With some more vMotioning, I narrowed it down to a single VM.