ssSFrankSss
Enthusiast

Basic questions about vSAN disk size selection


Hi,

1) For an all-flash vSAN, the cache tier should be 10% of capacity, is that right? For example, if 3TB per host is enough, then I would need a 300GB SSD for cache, correct?

2) For the capacity tier, suppose we agreed that 3TB per host is the right size. Would 4x 750GB SSDs offer double the read/write speed of 2x 1500GB SSDs of the same brand?

3 Replies
TheBobkin
VMware Employee

Hello Frank,

1) Actually, the sizing recommendation for all-flash differs from hybrid and is based on the workload rather than on a capacity ratio:

https://blogs.vmware.com/virtualblocks/2017/01/18/designing-vsan-disk-groups-cache-ratio-revisited/
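To show what "sizing on the workload rather than a capacity ratio" looks like in practice, here is a rough back-of-the-envelope sketch. The formula and the default values (4KB I/Os, half an hour of write buffering) are illustrative assumptions of mine, not official VMware guidance; use the linked blog for the real recommendations.

```python
# Illustrative only: estimate an all-flash cache-tier size from the
# expected write workload instead of a fixed capacity percentage.
# The 4KB I/O size and 0.5h buffer window are ASSUMPTIONS for the
# example, not VMware's official sizing rules.

def required_cache_gb(write_iops, io_size_kb=4, buffer_hours=0.5):
    """Estimate cache size (GB) needed to absorb sustained writes.

    write_iops   -- expected sustained write IOPS for the disk group
    io_size_kb   -- average write size (guidance assumes small ~4KB I/Os)
    buffer_hours -- assumed hours of sustained writes the cache absorbs
    """
    write_mb_per_s = write_iops * io_size_kb / 1024
    return write_mb_per_s * 3600 * buffer_hours / 1024  # MB -> GB


# Example: 10,000 sustained 4KB write IOPS for half an hour
print(round(required_cache_gb(10_000), 1))  # -> 68.7 GB
```

The point of the exercise: the same 3TB capacity tier could need a much smaller or much larger cache device depending on the write rate and I/O size, which is why the per-capacity 10% rule does not apply to all-flash.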

2) More disks are preferable, especially on all-flash, where the cache tier is used 100% as write cache and reads are served from the capacity tier (as opposed to the 70% read / 30% write cache split in hybrid). If the capacity tier is the main bottleneck, then yes, this may increase performance, though it won't double it; adding those disks as a second DG would get closer to doubling.

If you can spring for a cache device larger than 300GB, I would say that would make more of an improvement than going from 2x to 4x capacity-tier disks.

This does however have other potential impacts: 750GB is a bit on the small side. vSAN (LSOM) components are stored as chunks of at most 255GB per disk, and while 4 disks per disk group may offer more benefit for striping and resiliency (losing 1 disk means losing 1/4 of the capacity as opposed to 1/2 with 2 disks), it may be more awkward if the workload consists of relatively large vmdks (500GB-1TB+).
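To make the component-chunk and failure-domain arithmetic concrete, here is a small sketch. The 255GB maximum component size is from the point above; the helper names and the example vmdk size are just illustration.

```python
import math

# Sketch of the component-chunk arithmetic: vSAN splits large objects
# into components of at most ~255GB per capacity disk (per the post).

def component_count(vmdk_gb, chunk_gb=255):
    """Minimum number of components a vmdk of this size splits into."""
    return math.ceil(vmdk_gb / chunk_gb)

def capacity_lost_on_disk_failure(disks_per_dg):
    """Fraction of a disk group's capacity lost when one disk fails."""
    return 1 / disks_per_dg


print(component_count(1024))                # 1TB vmdk -> 5 components
print(capacity_lost_on_disk_failure(4))     # 4x 750GB  -> 0.25 lost
print(capacity_lost_on_disk_failure(2))     # 2x 1500GB -> 0.5 lost
```

So with 4x 750GB disks a single disk failure costs a quarter of the group's capacity rather than half, but a large vmdk's components have fewer comfortable placement options on the smaller disks.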

Avoid SATA devices throughout.

A decent amount of further info here:

https://storagehub.vmware.com/export.../vmware-r-vsan-tm-design-and-sizing-guide

Bob

ssSFrankSss
Enthusiast

OK, so it seems like a 30% write / 70% read [cache/capacity] split in all-flash vSAN with as much as you can SSDs would be the best all around solution, right? I just want to make sure we understand the same thing :smileyhappy:

TheBobkin
VMware Employee

Hello Frank,

If the expected/known workload is ~30/70 write/read, then size the cache tier according to the expected/known IOPS, and bear in mind the I/O size: as you can see from that first link, those per-IOPS sizing recommendations are based on relatively small I/Os (4KB). https://labs.vmware.com/flings/ioinsight can help you get more insight (pun intended :smileygrin:) into this.

"vSAN with as much as you can SSDs would be the best all around solution right?"

If you want to get the most performance out of vSAN, then multiple relatively small DGs (but not too small, and not with too-small disks) is the way to go.

Everything is a trade-off, as I said in the last comment: if you use double the number of half-size drives, you gain in one area but potentially lose out in others (e.g. recoverability). It can be a tight balancing act, but from experience with sizing I reckon ~1-1.5TB capacity drives are the sweet spot for all-flash (though other things, like the number of nodes, the number of DGs per node, and the number of capacity drives per DG, can affect this).

Bob
