VMware Cloud Community
andvm
Hot Shot

Slow vSAN (Nested setup)

Hi,

I have an E200-8D with a nested ESXi setup and have configured each of the 3 nested ESXi hosts with two disks: one M.2 and one 2.5" SATA HDD (5400 RPM).

I configured vSAN to use the M.2 disk for cache and the SATA HDD for capacity on each of the hosts, but transferring data to the vsanDatastore is extremely slow.

I ran the Proactive Tests: IOPS came in at just 4, and a data transfer to the vsanDatastore ran at only around 300 KB/s.

I know this is not officially supported, but as an experiment to test out vSAN I expected much better performance, given the M.2 disk is used as cache.

Thanks

2 Replies
RajeevVCP4
Expert

This is an issue with nested setups.

Rajeev Chauhan
VCIX-DCV6.5/VSAN/VXRAIL
Please mark as helpful or correct if my answer was useful to you.
TheBobkin
Champion

Hello andvm,

What are the sizes of the cache and capacity disks (and quantity) on the hosts?

Potentially it is not set up correctly, or the devices in use just aren't capable (and/or don't play well with nested vSAN). Having just one component with an X performance rating doesn't mean the cluster will be as fast as X; you have to consider the other components too and how they can become bottlenecks that limit the capability of the faster components. What kind of stats do you get in CrystalDiskMark (or similar) for the M.2 device and the SATA HDD? (If you don't have a Windows guest handy, there is a rough Python alternative sketched below.)
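
As a stand-in for CrystalDiskMark, a few lines of Python run inside a guest (or anywhere that can write to the storage under test) give a rough sequential-write number. This is only a sketch: the file path is a placeholder for a file on the disk/datastore you want to test, and it measures filesystem-level writes, so treat the result as a ballpark rather than a device spec.

import os
import time

PATH = "/path/on/datastore/test.bin"  # placeholder: point at the storage under test
BLOCK = 1024 * 1024                   # 1 MiB per write
TOTAL = 256 * 1024 * 1024             # 256 MiB total

buf = os.urandom(BLOCK)               # incompressible data, so compression/dedupe can't inflate the result
start = time.time()
with open(PATH, "wb", buffering=0) as f:   # unbuffered binary writes
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += BLOCK
    os.fsync(f.fileno())              # force data to stable storage before stopping the clock
elapsed = time.time() - start
print("%.1f MB/s" % (TOTAL / elapsed / 1e6))

Even a ballpark figure is enough here: a healthy SATA HDD should land in the tens of MB/s sequential, which is two orders of magnitude above the ~300 KB/s you are seeing.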

I am assuming you are on an older build of vSAN/ESXi, since the Proactive Performance test is still available. I'm not sure which version the Performance graphs got added/improved in, but (if present) what do the outputs look like (Cluster > Monitor > Performance > vSAN - Backend) while writing some actual data, e.g. cloning a vmdk that has at least a few GB of written data? (A scripted way to kick off a clone is sketched below.) For comparison, cloning a few VMs at once on my nested setup (3-node, cache backed by a Samsung 960 EVO M.2 NVMe, capacity backed by a cheap Samsung SATA SSD) starts maxing out at ~550 IOPS / ~30 MB/s.
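
If clicking through the UI is tedious, something along these lines with pyVmomi can kick off the clone while you watch the graphs. The vCenter address, credentials, and VM names below are placeholders for your own (requires pip install pyvmomi):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips certificate validation
si = SmartConnect(host="vcsa.lab.local",              # placeholder vCenter
                  user="administrator@vsphere.local", # placeholder account
                  pwd="changeme",
                  sslContext=ctx)
content = si.RetrieveContent()

# Find the source VM by name via a container view (simple, not the fastest way;
# view cleanup is skipped for brevity since we disconnect right after).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
src = next(vm for vm in view.view if vm.name == "test-vm")  # placeholder VM name

# Clone into the same folder; an empty RelocateSpec keeps the current placement,
# so the new disks land on the vsanDatastore and generate sustained backend writes.
spec = vim.vm.CloneSpec(powerOn=False, location=vim.vm.RelocateSpec())
task = src.Clone(folder=src.parent, name="test-vm-clone", spec=spec)
print("Clone task started:", task.info.key)
Disconnect(si)

While that runs, the Backend graphs should show the write IOPS/throughput the cluster can actually sustain.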

RajeevVCP4: "This is an issue with nested setups" - the OP kind of stated this in the title.

If you know of some specific issue whereby the Proactive performance test gives a specific (e.g. 4 IOPS) or particularly low result in nested environments, then please do share some specific knowledge or documentation regarding it. If you mean to say that nested setups aren't capable of more than 4 IOPS, then this is simply untrue.

Bob
