VMware Cloud Community
Skioron
Contributor

vSAN Config - R730 - 8TB HDD

Hey,

So short story is I'm taking over a small site in the near future which is still in the process of being built by a contractor.

There have been several performance issues so far, and my research points to the vSAN config.

If anyone's got any advice to make the config below work, or just wants to tell me it's rubbish, that would be appreciated.

There are 20 Dell R730 nodes in the cluster, each node has:

     1 x 4 TB Sata SSD

     6 x 8 TB 7.2k RPM HDDs

     PERC H730P RAID Controller

The vSAN is set up with 1 SSD and 6 HDDs per disk group.
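For context, a quick back-of-envelope on that layout (the figures below are just the ones from this post, nothing measured; the FTT=1 halving is the standard RAID-1 mirroring overhead before slack space):

```python
# Back-of-envelope for the layout described above; drive counts and sizes come
# from this thread, the FTT=1 halving is the standard RAID-1 mirroring overhead.
cache_ssd_tb = 4        # 1 x 4 TB SATA SSD per host (cache tier)
hdd_count = 6           # 6 x 8 TB 7.2k RPM HDDs per host (capacity tier)
hdd_size_tb = 8
hosts = 20

raw_per_host_tb = hdd_count * hdd_size_tb        # 48 TB
raw_cluster_tb = raw_per_host_tb * hosts         # 960 TB
usable_ftt1_tb = raw_cluster_tb / 2              # ~480 TB before slack/overheads
cache_to_raw = cache_ssd_tb / raw_per_host_tb    # ~8.3% of raw per host

print(f"Raw capacity per host : {raw_per_host_tb} TB")
print(f"Raw capacity, cluster : {raw_cluster_tb} TB")
print(f"Usable at FTT=1 (RAID-1), before slack: ~{usable_ftt1_tb:.0f} TB")
print(f"Cache : raw capacity per host : {cache_to_raw:.1%}")
```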

The supporting network is 40 Gb and is set up properly, so I don't think that's a bottleneck; ESXi host networking is set up as per best practice too.

The main workload for the vSAN is ESXi VMs, with roughly a 60/40 read/write split.

All the research I've done tells me this isn't a suitable configuration for vSAN, disks are too large, 7.2K RPM is too slow.

I am very happy to be wrong as the end customer will be unhappy if the vSAN licenses go to waste.

In the meantime, I am still searching for issues.

Thanks

4 Replies
TheBobkin
Champion (Accepted Solution)

Hello Skioron,

Welcome to Communities

"All the research I've done tells me this isn't a suitable configuration for vSAN, disks are too large, 7.2K RPM is too slow."

Not that it isn't suitable - it will work - but it likely won't perform well and, depending on the workload, may not suffice.

Is this physical-design still open to change or already purchased and set? (and/or no take-backsies?)

SATA is really far from ideal, especially for the cache tier in a Hybrid configuration: ideally you want this to have the fastest possible access speed and a decent queue depth. That isn't to say an NVMe cache would make this cluster fly, as the oversized and slow capacity tier would still hold it back.
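On the queue-depth point, the nominal per-device command-queue limits alone illustrate the gap (rough, commonly quoted protocol figures, not measurements of any drive in this cluster):

```python
# Nominal per-device command-queue limits by interface; commonly quoted
# protocol figures, not measurements of any drive in this cluster.
nominal_queue_depth = {
    "SATA (NCQ)": 32,
    "SAS": 254,
    "NVMe": 64 * 1024,   # per queue; NVMe also supports many queues in parallel
}

for bus, depth in nominal_queue_depth.items():
    print(f"{bus:11s} ~{depth:>6d} outstanding commands")
```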

If this isn't in production and a few nodes could be split off for testing, you could check whether 2x disk groups of 3 capacity-tier drives per host is a significant improvement with the current hardware; if you have some better SSDs available, you could also test swapping out the current ones (e.g. in a 3-node test cluster to minimize hardware requirements).
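A rough sketch of what the 2x disk-group layout changes per host; note this assumes a second cache SSD can be sourced for each host (hypothetical here), since every vSAN disk group needs its own cache device:

```python
# Rough per-host comparison: 1 x (1 cache SSD + 6 HDD) vs 2 x (1 cache SSD + 3 HDD).
# Assumes a second cache SSD per host (hypothetical - each disk group requires
# its own cache device); drive sizes are the ones from this thread.
def layout(disk_groups, hdds_per_group, cache_tb_each):
    return {
        "disk groups": disk_groups,
        "cache devices working in parallel": disk_groups,
        "total cache (TB)": disk_groups * cache_tb_each,
        "raw TB taken offline if one cache device fails": hdds_per_group * 8,
    }

current = layout(disk_groups=1, hdds_per_group=6, cache_tb_each=4)
proposed = layout(disk_groups=2, hdds_per_group=3, cache_tb_each=4)

for name, details in (("current", current), ("proposed", proposed)):
    print(name, details)
```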

How much of the workload is intensive and/or business-critical?

You could always consider putting better drives in just a few nodes and having that higher-performance cluster handle those VMs while the remainder deals with less resource-intensive VMs.

vSAN Observer and/or the built-in performance graphs in the vSphere Web Client are a good start for comparing metrics, but HCIBench is preferable:

https://kb.vmware.com/s/article/2064240

Bob

HussamRabaya
VMware Employee

For the same configuration, what would be the suitable SSD disk type and size, and why?

TheBobkin
Champion

Hello HussamRabaya,

"for same configuration"

Which one - the OP's, or one of the hypothetical ones I mentioned?

"what will be the suitable SSD disk type "

Go SAS if possible, NVMe if it is worth it or necessary.

But this may depend on your requirements and the other components - e.g. sinking a high proportion of the budget into fast (and relatively expensive) NVMe drives will just make the next-slowest thing the bottleneck (e.g. the 8 TB 7.2k drives), so you would gain less benefit than from, for instance, spending the same NVMe money on 2x middle-of-the-road SAS SSDs.
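To put a rough number on why the capacity tier becomes the limit: sustained writes eventually destage from the cache device to the 7.2k HDDs, so beyond the write buffer the HDDs set the pace (the ~80 IOPS per spindle below is only a common rule of thumb for 7.2k drives, not a measurement):

```python
# Why a faster cache device alone doesn't lift sustained write performance:
# destaging is bounded by the 7.2k RPM capacity tier. ~80 random IOPS per
# spindle is a rule-of-thumb figure, not a measurement of these drives.
RULE_OF_THUMB_IOPS_72K_HDD = 80
hdds_per_host = 6

sustained_destage_iops = RULE_OF_THUMB_IOPS_72K_HDD * hdds_per_host  # ~480
print(f"Sustained random destage budget per host: ~{sustained_destage_iops} IOPS")

# Even a single mid-range SAS SSD can absorb bursts of tens of thousands of
# write IOPS, so once the write buffer fills, the HDDs are the limit whether
# the cache device is SAS or NVMe.
```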

"and size and why?"

For best performance, size your caching-tier appropriately as per the official guidance:

https://blogs.vmware.com/virtualblocks/2017/01/18/designing-vsan-disk-groups-cache-ratio-revisited/

https://storagehub.vmware.com/t/vmware-vsan/vmware-r-vsan-tm-design-and-sizing-guide/
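As a worked example of the sizing guidance in those links - the long-standing rule of thumb is a cache tier of at least 10% of anticipated consumed capacity, before FTT copies (the 15 TB figure below is purely illustrative, not taken from this cluster):

```python
# Worked example of the "cache >= 10% of anticipated consumed capacity
# (before FTT copies)" rule of thumb from VMware's sizing guidance.
# The consumed-capacity figure is illustrative, not taken from this cluster.
anticipated_consumed_per_host_tb = 15            # illustrative assumption
required_cache_per_host_tb = 0.10 * anticipated_consumed_per_host_tb

print(f"Cache needed per host by the 10% rule: {required_cache_per_host_tb:.1f} TB")

# The existing 4 TB SSD per host easily satisfies the *size* guidance (it
# covers up to ~40 TB of consumed capacity per host); the concern in this
# thread is the SATA interface and the slow capacity tier, not cache capacity.
```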

Bob

Skioron
Contributor

Thanks Bob! Sorry for the delayed response.

It's at the no-take-backsies point unfortunately, but we may be able to acquire a decently spec'd NAS, which would work around the hybrid performance issue.

Short term, vSAN-wise, I'm thinking of creating an All-Flash cluster for critical workloads and using the NAS for everything else.

Hopefully we can test the performance of All-Flash vs the NAS and keep the end customer interested in the tech moving forward.

Skioron
