VMware Cloud Community
MJMSRI
Enthusiast

vSAN Stretched Cluster expected network partition implication

Hi All, 

We have a stretched cluster 3+3+w so two data sites (DC1 & DC2) and a witness site. 

- There is a direct link from DC1 to the Witness Site

- There is a direct link from DC2 to Witness Site

- There is a 10Gb link between DC1 and DC2

 

- The link between DC2 and the Witness Site has failed.

- Now the witness is showing as in 'Group 2' network partition

- The 6 vSAN Hosts at both DC1 and DC2 are in Group 1 

- Pings from the hosts' vSAN VMkernel in DC1 to the Witness are successful, as that link is working OK. Pings from the hosts' vSAN VMkernel in DC2 to the Witness fail, as that link is down.

- All VMs are showing as 'non-compliant' against their storage policy.

 

Question - is the above the expected outcome of that link failure from a data site to the witness site? I would have thought that, as DC1 could still communicate successfully to and from the Witness site, the witness appliance would still be in Group 1, the VMs would remain compliant, and there would simply be a warning that one of the data sites has an issue.


Accepted Solutions
TheBobkin
Champion

@MJMSRI, Yes, this is expected behaviour: both the Master node (on one site) and the Backup node (on the other site) must be able to reach the Witness for it to be accepted as part of the cluster.


Keeping both copies of the data active here (as opposed to marking one site as failed and having the remaining site side with the Witness) is the better option. If that site were marked as failed, anything stored local-only to that site would be unavailable. It also means the current state of the data replicas is retained, avoiding an unnecessary resync once the issue is resolved, and it leaves the cluster in a more viable state for GSS to potentially help repair data from if another failure occurred in the meantime.
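To illustrate the rule described above, here is a minimal sketch (hypothetical pseudo-logic, not actual VMware code): the Witness only lands in the same partition group as the data nodes when both the Master and the Backup node can reach it.

```python
# Hypothetical sketch of the membership rule described above (not VMware's
# implementation): the Witness is grouped with the data nodes only when
# BOTH the Master and the Backup node can reach it.

def witness_partition(master_reaches_witness: bool,
                      backup_reaches_witness: bool) -> str:
    """Return which network-partition group the Witness lands in."""
    if master_reaches_witness and backup_reaches_witness:
        return "Group 1"  # Witness joins the data-node partition
    return "Group 2"      # Witness is partitioned off; data sites stay together

# The scenario from the question: DC1 -> Witness works, DC2 -> Witness is down.
print(witness_partition(master_reaches_witness=True,
                        backup_reaches_witness=False))  # -> Group 2
```

This matches the observed behaviour: with the DC2-to-Witness link down, the Witness shows in Group 2 while all six data hosts stay together in Group 1.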

