Hi to all,
we're evaluating a small 2-node vSAN 6.7 stretched deployment where each node has the following disk configuration: 2 disk groups, each with 1x 400GB SSD cache disk + 4x 1.92TB SSD capacity disks.
Assuming a correct deployment with the witness appliance on a third site, we're trying to figure out the behavior under the following failure scenarios:
- when the witness (quorum) is down/unreachable by both nodes due to maintenance/hardware failure, what happens to virtual machines if we lose one cache disk or one data disk on one node?
- when one node is down due to maintenance/hardware failure, what happens to virtual machines if we lose one cache disk or one data disk on the running node?
- will having two disk groups per node instead of one increase data redundancy?
- does used disk capacity influence the solution's recovery capability? (I mean: is using 30% of usable space different from using 80%?)
- is it correct to consider 14TB of usable disk space in this configuration?
Thank you
Most of your questions are answered in the stretched cluster guide: https://storagehub.vmware.com/t/vmware-vsan/vsan-stretched-cluster-guide/
Anyway, when both nodes lose access to the Witness and the described issues occur:
- we lose one cache disk on one node?
All VMs with components on the associated disk group will become unavailable, as you now have 2 failures (witness and disk group).
- we lose one data disk on one node?
All VMs with components on that disk will become unavailable, as you now have 2 failures (witness and disk).
When one node is down for maintenance/hardware failure, what happens to virtual machines if we lose one cache disk on the running node?
All VMs on the disk group associated with that cache disk will become unavailable; again, 2 out of 3 components of the object are missing.
When one node is down for maintenance/hardware failure, what happens to virtual machines if we lose one data disk on the running node?
Same as above: 2 out of 3 components of the object will be gone, and all impacted VMs will end up unavailable.
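All of the failure scenarios above reduce to the same quorum rule. A minimal sketch (illustrative only, not vSAN code, and the component names are made up for the example): an FTT=1/RAID-1 object in a stretched cluster has a replica on each data site plus a witness component, and it stays accessible only while a strict majority of votes is reachable.

```python
# Illustrative quorum check for a vSAN RAID-1 object (sketch, not real vSAN code).
# An object with FTT=1 has 3 components: replica at site A, replica at site B,
# and a witness component. Accessibility requires >50% of the votes.

def object_accessible(components_up):
    """components_up: dict mapping component name -> bool (reachable)."""
    votes_up = sum(components_up.values())
    total = len(components_up)
    return votes_up * 2 > total  # strict majority required

# Witness unreachable AND one site's disk group lost -> only 1 of 3 votes left:
print(object_accessible({"replica_site_a": False,
                         "replica_site_b": True,
                         "witness": False}))  # -> False (VM unavailable)

# Only the witness unreachable -> 2 of 3 votes, object remains accessible:
print(object_accessible({"replica_site_a": True,
                         "replica_site_b": True,
                         "witness": False}))  # -> True
```

This is why losing a disk group while the witness is already down makes VMs unavailable: two of the three components are gone, so there is no majority.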
Will having two disk groups per node instead of one increase data redundancy?
Yes, this will lower the risk, as VM components are distributed across the disk groups.
Yes, you will have around 14TB usable with RAID-1. And recoverability doesn't change, but the fuller the disks are, the longer it will take to recreate all objects on those disks...
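The 14TB figure can be sanity-checked with back-of-the-envelope arithmetic (a sketch under the stated assumptions; the overhead estimate is approximate, not an official sizing result):

```python
# Capacity estimate for the described configuration (sketch, not a sizing tool):
# 2 nodes x 2 disk groups x 4 x 1.92TB capacity SSDs, mirrored across
# sites with RAID-1 (FTT=1). Cache disks contribute no usable capacity.
nodes = 2
disk_groups_per_node = 2
capacity_disks_per_group = 4
disk_tb = 1.92

raw_tb = nodes * disk_groups_per_node * capacity_disks_per_group * disk_tb
usable_raid1_tb = raw_tb / 2  # RAID-1 mirrors every object across the sites

print(raw_tb)           # -> 30.72
print(usable_raid1_tb)  # -> 15.36
# After leaving slack space for rebuilds/rebalancing and filesystem
# overhead, roughly 14TB of effectively usable capacity is a fair estimate.
```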
Thank you, Duncan