VMware Cloud Community
eduruizblas
Enthusiast

Question about an all-flash config

Hello! I am thinking of setting up a 4-node all-flash stretched cluster, and I have doubts about the maximum size of the cache disk.
The idea is to configure one disk group per node with a 960 GB cache device and five 1.92 TB SSD capacity disks.
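To put rough numbers on it, here is a quick Python sketch of the sizing I have in mind (the ~10% cache:capacity guideline and mirroring across the two rooms with PFTT=1 are my own assumptions, not fixed requirements):

    # Back-of-the-envelope sizing for the proposed 4-node all-flash stretched cluster.
    # Assumptions on my part: the classic ~10% cache:capacity guideline as a sanity
    # check, and PFTT=1 so that data is mirrored between the two rooms.
    CACHE_GB = 960                 # one cache device per disk group
    DISK_GB = 1920                 # 1.92 TB per capacity SSD
    DISKS_PER_GROUP = 5
    NODES = 4                      # 2 + 2 across the two rooms

    capacity_per_node_gb = DISKS_PER_GROUP * DISK_GB      # 9,600 GB raw
    cache_ratio = CACHE_GB / capacity_per_node_gb         # 0.10 -> 10%
    raw_cluster_gb = NODES * capacity_per_node_gb         # 38,400 GB raw
    usable_pftt1_gb = raw_cluster_gb / 2                  # one mirror copy per site

    print(f"Raw capacity per node: {capacity_per_node_gb:,} GB")
    print(f"Cache:capacity ratio:  {cache_ratio:.0%}")
    print(f"Usable at PFTT=1:      {usable_pftt1_gb:,.0f} GB (before slack/overheads)")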

Would this configuration be possible?

Regards

6 Replies
hkg2581
VMware Employee

Hello @eduruizblas

Yes, the configuration you are planning, 1x 960GB cache with 5x 1.92TB capacity, looks good. Please make sure you understand the type of workload you are about to run, and make a choice on the CLASS of drive; it matters. Drive classes range from Class A (30,000 IOPS capability) to Class F (100,000 IOPS capability), and there is also the protocol to consider: 3D XPoint, NVMe, SAS, etc. Make sure you choose the right hardware for the types of application you are about to run.

Example: you can have an all-NVMe setup, NVMe cache with SAS capacity, an all-SAS setup, NVMe cache with SATA capacity, etc.
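As a rough illustration of matching drive class to workload (a Python sketch; only the two class figures I quoted above are filled in, the intermediate classes B-E should be taken from the vSAN HCL):

    # Rough illustration: pick the lowest listed drive class that still meets a
    # workload's write-IOPS requirement. Only the Class A and Class F figures
    # quoted above are filled in; look up the full class table on the vSAN HCL.
    DRIVE_CLASS_IOPS = {
        "Class A": 30_000,
        "Class F": 100_000,
    }

    def minimum_class(required_iops: int) -> str:
        for name, rating in sorted(DRIVE_CLASS_IOPS.items(), key=lambda kv: kv[1]):
            if rating >= required_iops:
                return name
        raise ValueError("No listed class meets this workload; check the HCL.")

    print(minimum_class(25_000))   # -> Class A
    print(minimum_class(80_000))   # -> Class F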

Regards,

Hareesh K G
Personal Blog: http://virtuallysensei.com
eduruizblas
Enthusiast

First of all, thanks for the reply.
The idea is to use NVMe for the cache disks and mixed-use SAS for the capacity disks.


Another question: do I necessarily need the Enterprise license for this config? I don't need to enable compression and deduplication.

The configuration would be as follows:
2 nodes in one room and another 2 in a different one, with a third room for the quorum. The distance between the two rooms does not exceed 100 meters, but I need fault tolerance for the loss of a complete room.
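For illustration, my understanding of how the quorum room gives room-level fault tolerance, sketched in Python (one vote per room is a simplification; real vSAN assigns votes per object component, and the witness holds only tiebreaker data):

    # Why the quorum room gives full-room fault tolerance in a 2+2+1 layout:
    # a mirrored object stays accessible as long as a majority of votes and at
    # least one data copy survive.
    ROOMS = {"room_A", "room_B", "quorum"}

    def accessible(surviving: set) -> bool:
        has_majority = len(surviving) > len(ROOMS) / 2
        has_data_copy = bool(surviving & {"room_A", "room_B"})
        return has_majority and has_data_copy

    for failed in [set(), {"room_A"}, {"quorum"}, {"room_A", "quorum"}]:
        surviving = ROOMS - failed
        print(f"failed={sorted(failed) or 'none'} -> accessible={accessible(surviving)}")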

HassanAlKak88
Expert

Hello,

Based on your configuration, it will be a stretched cluster, and as per the licensing table in the guide below you need the Enterprise license.

For more info, please see the vSAN licensing guide: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-vsan-66-licen...


If my reply was helpful, I kindly ask you to like it and mark it as a solution.

Regards,
Hassan Alkak
TheBobkin
Champion

Hello eduruizblas,

While the configuration you outlined is okay, whether it is optimal (or adequate) for the workload intended to run on it is another story. One thing to note is that the maximum that will actively be used as write cache in an All-Flash configuration is 600GB per Disk-Group; anything more than that is used only for extending the lifespan of the device. If you are using modern NVMe (e.g. P4800X), these have an incredibly high TBW rating anyway, so I wouldn't consider spending the extra money on a 960GB over a 600GB model to offer much benefit.
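To put rough numbers on that cap (a quick Python sketch; the 600GB active write-buffer limit per Disk-Group is the documented behaviour, the device sizes are just examples):

    # How much of the cache device is actively used as write buffer in an
    # All-Flash disk group: vSAN caps the active write buffer at 600 GB per
    # disk group, and anything above that only adds endurance headroom.
    WRITE_BUFFER_CAP_GB = 600

    for device_gb in (400, 600, 960):
        active = min(device_gb, WRITE_BUFFER_CAP_GB)
        headroom = device_gb - active
        print(f"{device_gb} GB device -> {active} GB active buffer, "
              f"{headroom} GB endurance headroom only")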

Designing vSAN Disk groups - All Flash Cache Ratio Update - Virtual Blocks

What would almost certainly benefit you more would be buying 2 cache-tier devices (e.g. 2x400GB or 2x600GB) and configuring 2 smaller Disk-Groups instead of one large one - note that the recommendations in the article above are based on 2x Disk-Groups, as this outperforms a single Disk-Group of equivalent size and cache:capacity ratio. Going with 2 Disk-Groups may also allow for easier expansion at a later point.

As my colleague Hareesh mentioned, do pay attention to device specs: there can be a massive difference in performance between two devices that are both "NVMe" or "SSD". These labels merely describe the device format and don't automatically indicate performance capabilities (or what type of workload a device is optimised for); e.g. better HDDs may outperform lower-end SSDs, and higher-end SSDs may outperform lower-end NVMe devices.
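For a concrete per-host comparison of the two layouts (an illustrative Python sketch; same 9.6TB of capacity either way, using the 600GB write-buffer cap from above):

    # Illustrative comparison: one large disk group vs two smaller ones per host,
    # assuming the 600 GB active write-buffer cap and that a failed cache device
    # takes its whole disk group offline until rebuilt.
    WRITE_BUFFER_CAP_GB = 600
    TOTAL_CAPACITY_GB = 5 * 1920   # 9,600 GB of capacity per host either way

    for groups, cache_gb in [(1, 960), (2, 600)]:
        active_buffer = groups * min(cache_gb, WRITE_BUFFER_CAP_GB)
        blast_radius = TOTAL_CAPACITY_GB // groups   # capacity behind one cache device
        print(f"{groups} disk group(s), {cache_gb} GB cache each: "
              f"{active_buffer} GB active buffer, one cache failure "
              f"offlines {blast_radius:,} GB")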

If you want to configure a 2+2+1 cluster, you will need Enterprise vSAN licensing regardless of whether the sites are 100 metres or 100 km from one another.

Bob


eduruizblas
Enthusiast

Thanks for the reply. In the case of using a configuration of two vSAN clusters, 1+1+1 and 1+1+1, with one of them all-flash and the other hybrid (to cut costs on the project), could you use Standard licenses for this design? Regards

TheBobkin
Champion

As per the licensing documentation, a Standard license should be fine (whether properly stretched across sites or in the same building):

Scenario 13: 1 host + 1 host + witness (2-node cluster) across 3 rooms in the same building, with maximum 25 VMs. This scenario requires either 1 vSAN STD for ROBO (per-VM 25-pack) or 4 vSAN STD per CPU licenses.

Scenario 14: 1 host + 1 host + witness (2-node cluster) across physical sites, with maximum 25 VMs. This scenario requires either 1 vSAN STD for ROBO (per-VM 25-pack) or 4 vSAN STD per CPU licenses. Customer does not require ENT licenses for a 2-node cluster stretched across 2 physical sites.

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vsan/vmware-vsan-67-licen...
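Putting rough numbers on your two proposed 1+1+1 clusters (a Python sketch; 2 CPUs per host is my assumption, which is what makes a 2-node cluster need 4 per-CPU licenses in the scenarios above):

    # Rough license count for two 2-node (1+1+1) clusters on Standard licensing,
    # per the scenarios quoted above. Assumes 2 CPUs per host; the witness
    # appliance ships with its own embedded license and is not counted.
    CLUSTERS = 2
    HOSTS_PER_CLUSTER = 2
    CPUS_PER_HOST = 2              # assumption

    per_cpu_licenses = CLUSTERS * HOSTS_PER_CLUSTER * CPUS_PER_HOST   # 8
    robo_packs = CLUSTERS          # one 25-VM ROBO pack per cluster (if <=25 VMs each)

    print(f"Option A: {per_cpu_licenses} x vSAN STD per-CPU licenses")
    print(f"Option B: {robo_packs} x vSAN STD for ROBO (25-VM pack)")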

Bob