Upgrading one ESX 4.0 U1 server to U3 in a 3-node HA cluster
After removing host1 from the HA cluster, its U3 upgrade went fine; however, rejoining the cluster failed with
"...Internal AAM Error - agent could not start..."
The AAM logs showed
Backbone Terminated: Cannot start <backbone>: DNS lookup failure for <hostname of host2>
This prompted adding an entry for <hostname of host2> to host1's /etc/hosts, so the file then read
127.0.0.1 localhost
::1 localhost
129.xx.xx.xx <hostname of host1>
129.xx.xx.xx <hostname of host2> # Added by VMware HA
Host1 then rejoined the HA cluster successfully.
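Before re-enabling HA on a host, it may help to confirm that every cluster node's hostname actually resolves from that host. A minimal sketch (the node names host1/host2/host3 and their domain are hypothetical placeholders, and this assumes getent is available in the service console):

```shell
#!/bin/sh
# For each cluster node, check whether its hostname resolves
# (via DNS or /etc/hosts, per the nsswitch.conf order).
for node in host1 host2 host3; do
    if getent hosts "$node" > /dev/null 2>&1; then
        echo "OK: $node resolves"
    else
        echo "FAIL: $node does not resolve - fix DNS or add it to /etc/hosts"
    fi
done
```

Any FAIL line would point at the same kind of lookup failure that AAM reported above.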
1. Should the /etc/hosts file on every node in the cluster have an entry for each of the cluster nodes?
2. The HA configuration on this 4.0 U1 cluster shows all 3 nodes as Primary, and none is designated as Primary Master.
Under ESX 4.1, the log "aam_config_util_monitornodes.log" does show one cluster node as Primary Master.
This log file is also absent from 4.0 U1 HA. Is the Primary Master designation a feature of ESX 4.1 only?
Thanks for your comments
Hong