OK, I am patching 3 hosts which were working perfectly. I patch the first and join it back to the cluster without a hitch. I then patch the second and, on trying to restart HA, get the dreaded Internal AAM error. I disabled HA for the cluster and then re-enabled it; this time the second host enables HA without error, and the first does likewise, but the third then fails. So I migrate all the machines from the first host to the second and place the first host in maintenance mode. Next I disable and re-enable HA again, and this time the second and third hosts enable HA without error. I then take the first host out of maintenance mode and it joins the cluster without error.
To me this proves the problem has nothing to do with the configuration of the hosts (or am I missing something?), since a host with a real AAM problem would always have that problem. Could someone please explain this and come up with a fix, as it appears to happen frequently, not only to me but to other users.
Ian Scott.
Hi Ian,
Have a read of this: http://www.yellow-bricks.com/vmware-high-availability-deepdiv/
It explains HA in more depth. As you'll notice further down the article, a host entering maintenance mode triggers an HA election. During patching, all these elections can cause the errors you're seeing. I normally disable HA until the patching has completed and then re-enable it. Not a perfect solution, but a solution. If you want to script the toggle rather than clicking through the client each time, see the sketch below.
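Something along these lines should work with the pyvmomi Python library. This is an untested sketch, and the vCenter address, credentials, and cluster name ("MyCluster") are placeholders you'd substitute with your own:

    # Minimal pyvmomi sketch: disable HA on a cluster before patching,
    # then re-enable it afterwards. All connection details below are
    # placeholders -- substitute your own environment's values.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    def set_ha(cluster, enabled):
        # Reconfigure the cluster's HA (das) service on or off.
        spec = vim.cluster.ConfigSpecEx(
            dasConfig=vim.cluster.DasConfigInfo(enabled=enabled))
        WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))

    # Skipping certificate validation here for brevity; use a proper
    # SSL context in production.
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        cluster = next(c for c in view.view if c.name == "MyCluster")

        set_ha(cluster, False)   # HA off for the duration of patching
        # ... patch and reboot each host here ...
        set_ha(cluster, True)    # HA back on once all hosts are up
    finally:
        Disconnect(si)

Run it once with HA off before you start patching, do the host updates, and run the re-enable step at the end, which avoids the repeated HA elections while hosts bounce in and out of maintenance mode.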
Kind regards,
Glen
Glen,
Thanks, I tried that and it appears to work. It's very flaky having to disable HA every time a host is removed or added, though.
Ian Scott
