Yes, the configuration you are planning (1x 960GB cache with 5x 1.92TB capacity) looks good. Please make sure you understand the type of workload you are about to run, because the CLASS of drive you choose matters. Drive classes range from Class A (30,000 IOPS capability) to Class F (100,000 IOPS capability), and the protocol matters too: 3D XPoint, NVMe, SAS, etc. Make sure you choose the right hardware for the types of applications you are about to run.
Example: you can have an all-NVMe setup, NVMe cache with SAS capacity, an all-SAS setup, NVMe cache with SATA capacity, etc.
Hareesh K G
First of all, thanks for the reply.
The idea is to use NVMe for the cache disks and mixed-use SAS for the capacity disks.
Another question: do I necessarily need the Enterprise license for this configuration? I don't need to activate compression and deduplication.
The configuration would be as follows:
2 nodes in one room and another 2 in a different one, with a third quorum room. The distance between the two rooms does not exceed 100 metres, but I need fault tolerance for the loss of a complete room.
Based on your configuration, it will be a stretched cluster, and as per the below you need the Enterprise license.
For more info, please see the vSAN licensing guide: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-vsan-66-licensing-guide.pdf
While the configuration you outlined is okay, whether it is optimal (or adequate) for the workload intended to run on it is another story. One thing to note: the maximum that will actually be actively used for write cache in an All-Flash configuration is 600GB per disk group - anything beyond that only serves to extend the lifespan of the device. If you are using a modern NVMe device (e.g. the P4800X), it has an incredibly high TBW anyway, so I wouldn't consider the extra money for a 960GB over a 600GB model to offer much benefit.
What would almost certainly benefit you more is buying 2 cache-tier devices (e.g. 2x 400GB or 2x 600GB) and configuring 2 smaller disk groups instead of one large one - note that the recommendations in the article above are based on 2 disk groups, as this outperforms a single disk group of equivalent size and cache:capacity ratio. Going with 2 disk groups may also make expansion easier at a later point. Also, as my colleague Hareesh mentioned, do pay attention to device stats: there can be a massive difference in performance between two devices that are both "NVMe" or "SSD". That is merely a device format and doesn't automatically indicate performance capabilities (or what type of workload the device is optimised for); e.g. better HDDs may outperform lower-end SSDs, and higher-end SSDs may outperform lower-end NVMe devices.
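To make the cache-sizing point above concrete, here is a rough sketch in Python. It assumes only what the thread states: in an All-Flash configuration, at most 600GB of a disk group's cache device is actively used for write cache, so splitting into two disk groups (each with its own cache device) increases the total actively used cache.

```python
# Rough sizing sketch based on the discussion above: vSAN actively uses
# at most 600GB per disk group for write cache in an All-Flash setup;
# anything beyond that only adds endurance headroom. Sizes are in GB,
# and the device sizes are just the examples mentioned in the thread.

WRITE_CACHE_CAP_GB = 600  # per disk group, All-Flash

def usable_write_cache(cache_devices_gb):
    """Total actively used write cache, one cache device per disk group."""
    return sum(min(size, WRITE_CACHE_CAP_GB) for size in cache_devices_gb)

# One large disk group with a single 960GB cache device:
single = usable_write_cache([960])       # capped at 600GB
# Two smaller disk groups, each with a 600GB cache device:
dual = usable_write_cache([600, 600])    # 1200GB actively used

print(single, dual)  # 600 1200
```

The 960GB device leaves 360GB doing nothing but wear-levelling, while two 600GB devices double the active cache for a similar spend.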
If you want to configure a 2+2+1 cluster, you will need vSAN Enterprise licensing regardless of whether the sites are 100 metres or 100 km from one another.
Thanks for the reply. If we instead used a configuration of two vSAN clusters, 1+1+1 and 1+1+1, with one of them all-flash and the other hybrid (to keep the project affordable), could we use Standard licenses for this design? Regards
As per the licensing documentation, a Standard license should be fine (whether properly stretched across sites or within the same building):
Scenario 13: 1 host + 1 host + witness (2-node cluster) across 3 rooms in the same building, with a maximum of 25 VMs. This scenario requires either 1 vSAN STD for ROBO (per-VM 25-pack) or 4 vSAN STD per-CPU licenses.
Scenario 14: 1 host + 1 host + witness (2-node cluster) across physical sites, with a maximum of 25 VMs. This scenario requires either 1 vSAN STD for ROBO (per-VM 25-pack) or 4 vSAN STD per-CPU licenses. The customer does not require ENT licenses for a 2-node cluster stretched across 2 physical sites.
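The arithmetic behind the two quoted scenarios can be sketched as follows. This is a hypothetical worked example, not VMware's official sizing tool: the "4 per-CPU licenses" figure comes out of assuming 2 hosts with 2 CPU sockets each (the witness appliance is not licensed), and the ROBO option is priced per 25-VM pack.

```python
# Hypothetical licensing arithmetic for the scenarios quoted above:
# a 2-node (1 host + 1 host + witness) cluster with up to 25 VMs needs
# either one vSAN STD for ROBO 25-VM pack, or one STD per-CPU license
# per socket. Socket counts are an assumption (2 sockets per host).

def per_cpu_licenses(hosts, sockets_per_host):
    """STD per-CPU licenses needed (witness appliance not counted)."""
    return hosts * sockets_per_host

def robo_packs(vm_count, pack_size=25):
    """ROBO per-VM packs needed for the given VM count."""
    return -(-vm_count // pack_size)  # ceiling division

print(per_cpu_licenses(2, 2))  # 4, matching "4 vSAN STD per CPU"
print(robo_packs(25))          # 1, matching "1 vSAN STD for ROBO"
```

Which option is cheaper then depends on list pricing and how close to the 25-VM ceiling you expect to run.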