Hi,
I have the following scenario:
- vSAN cluster of 4 hosts (ESXi 6.7 U1)
- 2 switches dedicated to vSAN, isolated from other networks
  - vDS with 2 NICs: vSAN traffic and vMotion traffic
- 2 switches for everything else
  - vDS with 2 NICs: VM traffic, ESXi mgmt, ...
- isolation IP address is kept at its default
- test VMs are created
Problem:
When I unplug the 2 network cables of the ports connected to the vDS for VM traffic (the host is still powered on)
-> no host isolation is triggered -> no VMs are restarted on another host.
But when the 2 network cables of the ports connected to the vSAN vDS are unplugged
-> host isolation is triggered -> the VMs are restarted.
Question:
What should be done so that host isolation is triggered in both cases?
Should I put 2 isolation addresses in the HA advanced settings?
Thanks in advance.
VMware changed the HA design when vSAN is in use. With a network design where mgmt/VM traffic and vSAN traffic run over 2 separate switch infrastructures, there will always be a case where HA is not triggered: host isolation is only declared when the VMkernel port with vSAN enabled can no longer communicate with the other hosts.
See here:
vSphere HA heartbeat datastores, the isolation address and vSAN - Yellow Bricks
The case of a split physical network setup was already discussed in an older thread: Managment network failure on vSAN does not trigger isolation response, so VMs are left in a totally ...
In short:
You have 2 options: combine everything on one dvSwitch and use the same physical switch infrastructure for all dvs uplinks, or wait until VMware changes its vSAN HA design.
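As a side note on the isolation-address question: when HA is combined with vSAN, the usual recommendation (covered in the Yellow Bricks article above) is to point the isolation address at an IP that is pingable on the vSAN network, since that is the network HA now heartbeats over. A minimal sketch of the cluster's HA advanced options, assuming 10.10.10.1 is a hypothetical reachable IP on your vSAN segment:

```
# vSphere HA advanced options (Cluster > Configure > vSphere Availability > Advanced Options)
das.usedefaultisolationaddress = false   # stop pinging the mgmt default gateway
das.isolationaddress0 = 10.10.10.1       # placeholder: an IP reachable via the vSAN VMkernel port
```

Note this only changes which address HA pings to confirm isolation; it does not change the behavior described above. Isolation is still detected on the vSAN network, so a failure limited to the VM/mgmt switches will still not trigger the isolation response.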