VMware Cloud Community
Akide
Enthusiast

Issue with vSAN SSD cache configuration.

Hi, VMware community,

There is a problem with my current vSAN cluster; my deployment is shown below:

6.4TB HDD and 2x200GB SSD in RAID 0 for cache.

The read speed is up to 1.2GB/s, which matches the SSDs' top speed,

but the write speed is only 200-250MB/s, which is far below the SSDs' top speed.

On the other hand, when I use vMotion to move VMs from another host onto the vSAN datastore, an error appears that says 'SSD congestion'.

Since the read speed is good but the write speed drops off, I guess this issue is probably related to the host's RAID configuration. Do you have any suggestions on whether to use write-back or write-through mode for the SSD cache?

Thanks

4 Replies
TheBobkin
Champion

Hello Akide,

"2x200GB SSD with RAID 0 for cache"

Do NOT RAID multiple devices together before presenting them to vSAN as capacity/cache-tier devices; vSAN is not intended for this type of usage.

The only time any form of RAID is applied at this level is individual RAID0 VDs per disk on controllers that support this.

Aside from this though, large amounts of SSD congestion are usually the result of using hardware that is not on the HCL and/or a design that is not fit for the intended purpose. For example, if you put only a single large, slow HDD as the capacity-tier behind a small SSD for the cache-tier, the HDD is not going to be able to receive the destaged data fast enough, and thus the SSD essentially has to be throttled (congestion) so it doesn't get over-burdened. In other words, you are expecting the write performance of the SSD here, but you are more likely getting the performance limit of the HDD, as that is the bottleneck.

In realistic set-ups, no one puts a single (large) HDD behind an SSD, as this does not get the full benefit of caching; multiple HDDs (4-7) plus a larger cache device is more typical.
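If it helps to sanity-check what the hypervisor actually sees, here is a minimal sketch from the ESXi shell (assuming a 6.x host; no device names are needed for these two):

# Show local devices and whether vSAN considers them eligible or already in use
vdq -q

# Show devices currently claimed by vSAN, their tier (cache vs capacity) and Disk-Group membership
esxcli vsan storage list

With the 2xSSD RAID0 in place you would expect to see only a single cache device listed, since the controller is presenting that VD as one disk.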

Bob

Akide
Enthusiast

Thanks so much for your suggestion and explanation. I used a RAID 5 group for the 7 HDDs; sorry I didn't mention that before.

In a speed test, the RAID group manages over 500MB/s.

According to your explanation, this issue should be related to the cache SSDs not being big enough.

TheBobkin
Champion

Hello Akide,

Don't apply any RAID configuration before exposing devices for placement of vSAN system partitions. The point of vSAN is to offer protection of data Objects (e.g. vmdk/vswp) at the software level via SPBM, not at the hardware level - any application of RAID at the hardware level is only going to cause complications.

If you have 2xSSD and 7xHDD then you would get the best throughput by configuring 2 Disk-Groups (yes, they will have uneven capacity here, but it is still better than how you have it now). As I said above, a bigger cache will only help to a point, and depending on circumstances; if the receiving end of the workload (the HDDs here) cannot keep up, then throttling will occur.
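Purely as a sketch of what that would look like from the ESXi shell once the controller presents the disks individually (the naa.* IDs below are placeholders; double-check the exact options with esxcli vsan storage add --help on your build):

# Identify the naa.* IDs of the cache SSDs and capacity HDDs
esxcli storage core device list

# Disk-Group 1: first SSD as cache, four HDDs as capacity
esxcli vsan storage add -s naa.CACHE_SSD_1 -d naa.HDD_1 -d naa.HDD_2 -d naa.HDD_3 -d naa.HDD_4

# Disk-Group 2: second SSD as cache, remaining three HDDs as capacity
esxcli vsan storage add -s naa.CACHE_SSD_2 -d naa.HDD_5 -d naa.HDD_6 -d naa.HDD_7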

Bob

aNinjaneer
Contributor

Using RAID on the underlying devices defeats the purpose of Software Defined Storage. If you're just trying to get more performance, simply split the drives into more disk groups.

There are a few things to consider here. The first is that you've added additional layers into the storage stack, each of which adds its own latency. The second is that vSAN has to write copies of data across the network, so if you're using slow networking (e.g. 1GbE), you will see increased latency for writes. Lastly, with such small cache drives, you have to consider the usage: if an SSD is running at 100% full, its performance will suffer because of garbage collection.

The first step is removing the RAID, presenting the raw block devices to vSAN, and creating two disk groups, each with half of the drives. From there, you can look at the other points to see where your bottleneck is. "SSD Congestion" means your capacity tier is too slow to keep up with the cache tier. All of your problems may be alleviated simply by removing the RAID devices, putting your controller in pass-through/HBA mode, and presenting the raw block devices to ESXi directly.
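A rough sketch of how you might verify those points from the ESXi shell once the RAID is gone (the health command assumes a reasonably recent 6.x build; run these on each host):

# Check uplink speeds - vSAN over 1GbE will show up as write latency
esxcli network nic list

# Devices vSAN has claimed, per tier and Disk-Group
esxcli vsan storage list

# Built-in vSAN health checks, including disk and network tests
esxcli vsan health cluster list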
