vSAN recommended cache size for vSAN 8 OSA
Hi All,
I'm currently evaluating performance with vSAN 8 on an all-flash cluster.
3-node cluster
Per node: 12 × 3.64 TB NVMe PCIe disks are available. All of these are read-intensive disks.
Total: 36 NVMe PCIe disks across the 3 nodes.
I have tested with 3 disk groups per node using the same 36 NVMe disks. I understand we shouldn't use the 3.64 TB disks for the cache tier (there will be a performance hit), but I used them anyway to test overall performance, and it's OK for now.
As I said, all 36 NVMe disks are read-intensive, so I'd like to continue our testing with write-intensive disks for the cache tier.
What capacity should I go with for the cache tier: 600 GB or 1.6 TB disks?
Note: we are going to purchase new drives for the cache tier only and will keep testing with the same 36 drives for the capacity tier.
From the vSAN documentation:
vSAN cache tier capacity is capped at 600GB currently.
Starting with vSphere 8.0, vSAN supports higher cache tier capacities, up to 1.6TB. However, this is not enabled by default. By default, any new disk groups getting created will still use cache tier capacity of only up to the existing limit of 600GB.
Bobkin's message is useful:
1. Yes, it is necessary to use a whole, unpartitioned, All-Flash Cache-tier certified device as the Cache-tier (advisable to validate that the devices are on the vSAN HCL and certified for that purpose before purchasing anything). There is no Disk-Group without exactly one Cache-tier SSD/NVMe + 1-7 Capacity-tier devices.
2. No, a whole device is needed. Using an 8TB device for this is not a good use of resources, since the write buffer in current versions of vSAN will only actively use at most 600GB. You are better off using something smaller and faster, e.g. a 600-800GB write-intensive NVMe over a possibly worse-performing 4-8TB read-intensive SSD/NVMe. (Bear in mind that by "worse performing" I mean like for like, e.g. an 8TB device using only 600GB isn't going to deliver the full device's performance.)
3. Because this is how the vSAN architecture has been designed. The intention is to use smaller, relatively faster devices for the Cache-tier and larger, less write-intensive devices for the Capacity-tier.
I would go for the largest that fits your budget. If you can afford 1.6TB, I would do that instead of 600GB, as you simply have more capacity to store cached data before it needs to be destaged.
Thanks depping for your feedback.
There is a RAM penalty of 5GB per disk group for enabling the larger cache, and VMware also says the benefits may not be realized unless the larger cache is enabled and provisioned with larger drives across the cluster. The RAM penalty can add up fast -- even 5 hosts with 2 disk groups each will cost you 50GB of RAM for this feature.
https://kb.vmware.com/s/article/89485
While this feature appears to increase performance, I wonder if they could offer another mode that just spreads the wear across the disk without the RAM penalty. That would be nice. If the extra RAM is only needed to keep track of the larger cache, maybe it wouldn't be required if they kept the cache at 600GB no matter what size the disk is.
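To make the RAM cost concrete, here is a minimal back-of-the-envelope sketch based on the 5GB-per-disk-group overhead mentioned above. The function name and structure are illustrative only, not any VMware API, and it assumes the overhead scales linearly with disk group count:

```python
def large_cache_ram_overhead_gb(hosts: int, disk_groups_per_host: int,
                                per_disk_group_gb: float = 5.0) -> float:
    """Total extra host RAM (GB) consumed cluster-wide by enabling the
    larger vSAN OSA write buffer, assuming a flat per-disk-group cost."""
    return hosts * disk_groups_per_host * per_disk_group_gb

# The 3-node cluster from this thread, with 3 disk groups per node:
print(large_cache_ram_overhead_gb(3, 3))   # 45.0 GB

# The example from the post above: 5 hosts, 2 disk groups each:
print(large_cache_ram_overhead_gb(5, 2))   # 50.0 GB
```

Worth weighing that overhead against how much extra buffer you actually gain per disk group before destaging kicks in.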