VMware Cloud Community
Cymraes
Enthusiast

vSAN HA Master host election

I have an HCI configured as a vSAN stretched cluster across 2 sites, with a witness at a third site of course.


The master host for the HA cluster has been elected on the secondary site. With all 3 fault domains up, running, and active, I expected the master host election to select a host on the primary site.

In an attempt to get the master to move to the primary fault domain, I disabled and re-enabled HA on the cluster, hoping the re-election would place the master on the primary site, but it remains on the secondary. I have also selected 'Reconfigure host for HA' on all hosts at the secondary site, hoping this would prompt election of a master on the primary, but to no avail.

Is it normal for the HA master host to be on the secondary site?
If not, is there a way to force a host on the primary site to become the HA master?

any advice please?

thanks in advance

Lynwen

 

6 Replies
TheBobkin
Champion

@Cymraes , while vSphere HA/FDM handles the VM restarts, where VMs can/will restart in a vSAN stretched cluster is always going to be based on 1. the availability of the Objects and 2. the configured Preferred Fault Domain. For example, if the inter-site link failed, the site marked as Preferred would form a cluster with the Witness, while the non-Preferred site would basically not participate in the cluster (and thus not have quorum on any PFTT=1 Objects, making them inaccessible to it). The data Objects would then only be accessible from the Preferred site, so VMs could only be restarted there.
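To make the quorum point concrete, here is a toy sketch of the vote arithmetic (not vSAN code, just an illustration assuming one vote per component; real deployments can assign additional votes):

```python
# Toy illustration of why, during an inter-site partition, a PFTT=1 object
# only keeps quorum on the side that can still reach a majority of its votes.
# Assumption: one vote per component; real vSAN can assign additional votes.

COMPONENT_SITES = {
    "data_replica_preferred": "preferred",   # replica in the Preferred fault domain
    "data_replica_secondary": "secondary",   # replica in the non-Preferred fault domain
    "witness_component": "witness",          # component on the Witness appliance
}

def has_quorum(reachable_sites):
    """An object stays accessible only if more than half of its votes are reachable."""
    votes = sum(1 for site in COMPONENT_SITES.values() if site in reachable_sites)
    return votes > len(COMPONENT_SITES) / 2

# Inter-site link down: the Preferred site can still talk to the Witness.
print(has_quorum({"preferred", "witness"}))  # True  -> VMs can be restarted here
print(has_quorum({"secondary"}))             # False -> objects inaccessible on this side
```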

 

If @depping or whomever wants to weigh in on this further (as it is not my area of expertise and @depping eats this stuff for breakfast 😂), my understanding of HA/FDM is that it works on a 'deadman-switch' principle, but it won't attempt to restart a VM (or at the very least won't succeed) on a node that doesn't have access to an Object (e.g. because it lost quorum).

@Cymraes, if this is a new cluster with no (important) VMs on it then please go through a full battery of tests for anything whose behaviour you are unsure of - now is the time for this, not 6 months down the line when you find out something is amiss (like the network topology not being laid out the way it is documented to be 😑).

depping
Leadership

TheBobkin is right. Where and when HA can restart VMs is based on the accessibility of the objects and the state of the cluster. Keep in mind that if a partition occurs (the Preferred and Secondary locations cannot communicate) then both sides of the cluster will get a "master FDM" node (until the partition is lifted).

The host that becomes master is typically the host with the most datastores connected. If all hosts have the same number of datastores, the host with the highest MoRef ID will be picked; this is probably why you see a host in the secondary site being selected as the master host.
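If you want to check the tie-breaker values yourself, a minimal pyVmomi sketch along these lines will list each host's MoRef ID (the vCenter address and credentials below are placeholders, and it assumes the pyvmomi package is installed):

```python
# Minimal sketch: list the MoRef ID of every host visible to vCenter (pyVmomi).
# The vCenter address, credentials and SSL handling are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # host._moId is the managed object reference ID, e.g. "host-1234"
        print(f"{host.name}: {host._moId}")
    view.DestroyView()
finally:
    Disconnect(si)
```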

What you can do to influence which host is picked as master is set the advanced setting called "fdm.nodeGoodness" at the host level. This makes a particular host favoured over the others during the election. More details here: https://kb.vmware.com/s/article/80594

Cymraes
Enthusiast

@TheBobkin @depping thank you both for responding.

In addition to my first post, I should add that we recently had to shut down our primary datacentre, the Preferred site, due to electrical maintenance, so as expected the VMs restarted at the non-preferred site and a host on that site became master.

When the preferred site was online again, I expected the HA election to place the "master FDM" node back on the preferred site. This did NOT happen.

The hardware is Dell EMC VxRail with 10 nodes on the preferred site and 9 nodes on the non-preferred site. All 19 nodes belong to 1 stretched cluster.
All nodes are equal in that they have access to the exact same number of datastores: 1 local datastore on the host and 1 vSAN stretched cluster datastore.

@depping this is a newish cluster with the majority of VMs in 'production', so yes, I need to get this right now.

The vSAN stretched cluster was installed and configured by Professional Services, and all was good until we had the ISL outage. The only change since the original installation is that the witness appliance has been patched. No changes have been made to the network configuration at the vSphere level; our WAN is managed by our ISP.

Regarding the MoRef ID - I am not sure where to find the value for this?

Regarding the KB 80594 you referenced, this appears to be for vSphere 7.0+, while my version is VMware ESXi 6.7.0 build-17098360. If this change were applied, would it be a temporary or permanent change?

depping
Leadership

Why would you want to change the master? What is the use case for it? What do you think it improves/changes? Pre-7.x you can add "fdm.nodeGoodness" to "/etc/opt/vmware/fdm/fdm.cfg" as described in the KB. That should work, but again, I just wonder why you want to do this in the first place?

Cymraes
Enthusiast

@depping

I was just worried that the master, ordinarily, should be a host on the preferred fault domain. It sounds like I am mistaken and that is not the case, so I am happy 🙂

 

thank you for your help

 

 

depping
Leadership

No, it makes no difference. What will happen when you have a site partition is that an election process will automatically take place, and both locations will have a master. You've got nothing to worry about 🙂