VMware Cloud Community
vRocky
Contributor

VMs on SATA RAIDs

I've been monitoring my IOPS needs as I plan some environment changes, and I wanted to see if anybody has experience running VMs on 7200 RPM SATA drives.

These VMs would not be geared towards performance - this is primarily for QA-related testing. There would be a few SQL servers, a file server, and then 35+ miscellaneous VMs.

The proposed array is 12 x 1TB 7200 RPM SATA drives in RAID 6. I'd split the LUN into multiple datastores.

My monitoring is done via batch-mode esxtop jobs, focusing on per-VM reads/sec and writes/sec.
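In case it helps anyone reproduce this: I'm capturing with something along the lines of esxtop -b -d 15 -n 1920 > capture.csv (batch mode, 15-second samples, roughly 8 hours) and then summarizing the columns with a small Python script. The column matching below assumes esxtop's default perfmon-style CSV headers, so treat it as a sketch:

# Sketch: pull per-column peak and average reads/sec and writes/sec
# out of an esxtop batch capture (perfmon-style CSV, header row first).
import csv

def summarize(path):
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, samples = rows[0], rows[1:]
    for i, name in enumerate(header):
        # esxtop names virtual disk counters with "Reads/sec" / "Writes/sec"
        if "Reads/sec" in name or "Writes/sec" in name:
            vals = [float(r[i]) for r in samples if r[i]]
            if vals:
                print("%s  peak=%.0f  avg=%.1f" % (name, max(vals), sum(vals) / len(vals)))

summarize("capture.csv")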

My reads/sec burst up to ~400 for a handful of servers. This is not constant; they average 1 or less over ~8-hour captures. Writes on those same servers only burst up to 50/sec, again only sporadically.

The rest (~25 VMs of various roles) never burst above 30 reads or writes/sec either way.

This would all be connected over FC to multiple hosts running ESXi 5.1. The SAN is a few years old, but it does have 1 GB of cache in each controller. No VAAI or SIOC.

If you have gotten this far - do you think this spindle count will let me reliably run 40+ VMs? For what it's worth, I can see ~5TB going to the 3 SQL servers and 1 file server alone, with the rest going to small VMs used in testing. I don't need this to scream; I just want to feel good about keeping these systems operational.

3 Replies
chriswahl
Virtuoso

This is a pretty good post on how to understand RAID penalty.

http://theithollow.com/2012/03/21/understanding-raid-penalty/

You are pushing the envelope on 12 SATA disks. I usually calculate 80 IOPS per SATA disk; losing 2 spindles to parity with RAID 6 leaves 10, giving you 800 raw IOPS. That's only 20 raw IOPS per VM at 40 VMs.

If that 400 IOPS read spike and the 50 IOPS write spike land near each other, that's about 700 back-end IOPS consumed: 400 reads plus 50 writes x the RAID 6 write penalty of 6 = 300. If your cache eats up some of those reads, it won't be so bad.
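To make that math explicit (using the 80 IOPS per disk rule of thumb from above - these are estimates, not measurements):

# Back-of-napkin sizing for the proposed 12 x 1TB SATA RAID 6 array.
disks = 12
iops_per_disk = 80          # rule-of-thumb for 7200 RPM SATA
parity_disks = 2            # RAID 6
write_penalty = 6           # RAID 6 write penalty

usable_tb = disks - parity_disks                    # ~10 TB usable capacity
raw_iops = (disks - parity_disks) * iops_per_disk   # 800 raw IOPS
per_vm_iops = raw_iops / 40.0                       # 20 IOPS per VM at 40 VMs

# Worst case from the captures: read and write spikes landing together.
backend_iops = 400 + 50 * write_penalty             # 400 + 300 = 700 back-end IOPS

print(usable_tb, raw_iops, per_vm_iops, backend_iops)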

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
mikeyb79
Enthusiast

I would also be concerned with latency as you approach maximum IOPS. As you drive the array closer to its limit, latency will climb significantly, which will not be popular with your SQL servers. You already don't have much headroom; 20 IOPS/VM isn't a lot (assuming RAID 6 with no hot spares). I've seen backup jobs pin small arrays and drive latency to unacceptable levels as far as SQL is concerned. Caching helps to a small degree, but I would suggest benchmarking the storage array itself, using real-world workload data, to see what it can really handle at the limit in terms of max IO and latency.
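As a rough example - the numbers here are guesses based on the spikes you posted (roughly a 90/10 read/write mix), so adjust them to your own captures - something like fio from a Linux test VM sitting on the new datastore would do:

# Run inside a Linux test VM on the new datastore; tune size/runtime to taste.
fio --name=qa-mix --ioengine=libaio --direct=1 \
    --rw=randrw --rwmixread=90 --bs=8k \
    --iodepth=16 --numjobs=4 \
    --size=10g --runtime=600 --time_based --group_reporting

Watch the completion latency percentiles in fio's output as closely as the IOPS number; that's where SQL pain shows up first.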

vRocky
Contributor

Thanks for the replies.

This data would not be backed up often, but regardless, I feel like it would be pretty easy to make performance sluggish.

I was hoping to see if anybody had direct experience with a similar configuration.

If I'm forced to go down this route, I will plan to have space elsewhere in case the SQL servers end up being greedy with the limited IOPS.
