I was toying with some storage design models and considered starting with three datastore clusters, each with three gold-tier datastores (a 3x3 grid). I'd do something similar with the other storage tiers. The idea was to spread a VM's disks across datastores, so if a VM had three disks, they would be placed on Gold-Datastore-Cluster-1, -2, and -3 respectively. I haven't done any deep research on this, but since datastores have queues like anything else, I assumed spreading a VM's VMDKs across different datastores would provide a "bandwidth" increase. Is this logical?
NOTE: I know I wouldn't need datastore clusters to accomplish this, but separating across clusters ensures SDRS wouldn't automatically move VMDKs onto the same datastore.
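To make the placement idea concrete, here's a minimal sketch of the round-robin scheme described above. The cluster names follow the naming in the question; the function and disk labels are hypothetical and this is plain Python logic, not a vSphere API call.

```python
# Illustrative only: round-robin a VM's disks across gold-tier datastore
# clusters so no two disks land in the same cluster.
from itertools import cycle

GOLD_CLUSTERS = [
    "Gold-Datastore-Cluster-1",
    "Gold-Datastore-Cluster-2",
    "Gold-Datastore-Cluster-3",
]

def place_disks(num_disks, clusters=GOLD_CLUSTERS):
    """Assign each VMDK to the next cluster in turn."""
    picker = cycle(clusters)
    return {f"disk{i}": next(picker) for i in range(1, num_disks + 1)}
```

A VM with three disks gets one disk per cluster; a fourth disk would wrap back to Gold-Datastore-Cluster-1.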
A couple of points to consider:
If the datastores are all on the same physical disks, there won't be any performance gain, since the physical disks are the slowest part of the design.
You could (depending on hardware) gain an additional LUN queue to queue requests.
You could gain performance by ensuring multipathing (depending on hardware) even without splitting the disks.
This could make sense if one disk had a high-performance (tier) need and another had a very low-performance (tier) need.
A lot of this depends on your SAN hardware and design. If you've got an enterprise SAN with tiering and SSDs you probably don't need all this.
It also depends on how large your environment is. Personally, in a large environment, I wouldn't want to try to keep track of where split disks all reside.
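To put a rough number on the "additional LUN queue" point above: the outstanding I/Os a host can keep in flight scale with the number of LUNs it spreads the disks over. This sketch assumes a per-LUN queue depth of 32, a common HBA default, but check your own hardware; the function name is hypothetical.

```python
# Rough illustration: aggregate host-side queue depth grows with LUN count,
# assuming a fixed per-LUN queue depth (32 is a common HBA default).
def aggregate_queue_depth(luns, per_lun_depth=32):
    """Total outstanding I/Os the host can queue across the given LUNs."""
    return luns * per_lun_depth

# One datastore allows 32 outstanding I/Os; three allow 96 -- but only if
# the physical disks behind them can actually service that load.
```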
Jonathan
One positive point of keeping a VM's VMDKs in different datastores is that lock contention for that single VM would be reduced, which may help performance.
Thanks for the replies. The benefits of this design may not be sufficient in my current environment. I could see a potential performance increase if the datastore clusters accessed different physical storage frames/arrays, but that won't be the case here. The more I think about it, the more it seems any VM performance benefit strictly from a datastore queue perspective would be inconsistent and insignificant (at least in generalized terms).
One big advantage of datastore clusters is that vCenter / Storage DRS will monitor the capacity and the I/O latency of the datastores that are placed in a datastore cluster. When provisioning a VM, just point it towards the datastore cluster. If all the datastores are being provided by the same storage array, there is no reason to create that many different datastore clusters. Keep in mind that it's just a basic form of administration and abstraction. vCenter / Storage DRS will monitor the information for you, and when you create a VM(DK) it will automatically pick the datastore with the most free capacity and / or the least I/O latency.
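The initial-placement behavior described above can be sketched as a simple heuristic: prefer the datastore with the lowest I/O latency, breaking ties on free capacity. This is not the actual Storage DRS algorithm, and the datastore names and field names are made up for illustration.

```python
# Simplified sketch of the placement idea: lowest latency first,
# then most free capacity. Not the real Storage DRS algorithm.
def pick_datastore(datastores):
    """Return the datastore dict with the best (latency, free space) score."""
    return min(datastores, key=lambda d: (d["latency_ms"], -d["free_gb"]))

stores = [
    {"name": "gold-01", "free_gb": 500, "latency_ms": 8},
    {"name": "gold-02", "free_gb": 900, "latency_ms": 5},
    {"name": "gold-03", "free_gb": 700, "latency_ms": 5},
]
```

Here `pick_datastore(stores)` would choose `gold-02`: it ties with `gold-03` on latency but has more free capacity.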
Another advantage of using datastore clusters (assuming you already have the required license level, Enterprise Plus) is that SIOC (Storage I/O Control) is enabled by default. SIOC is a feature that allows for great things, if you're open to it and you understand when it works for you and when it works against you. Basically, it "caps" VMs that are overloading the datastore with I/O requests. With SIOC enabled, one VM can't consume so much bandwidth that other VMs' storage traffic suffers from it.
See a video about this here: Storage IO Control - SIOC - YouTube
I'd go with one Gold-Datastore-Cluster, and if you want to spread VMDKs across datastores, just configure Storage DRS VMDK anti-affinity rules (much like "regular" DRS rules).
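What a VMDK anti-affinity rule enforces boils down to a simple constraint: no two of a VM's disks may share a datastore. Storage DRS enforces this itself; the sketch below just illustrates the check against hypothetical placement data.

```python
# Illustration of the VMDK anti-affinity constraint: flag a placement
# where any two disks of the same VM share a datastore.
def violates_anti_affinity(placement):
    """placement maps disk name -> datastore name."""
    datastores = list(placement.values())
    return len(datastores) != len(set(datastores))
```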
Yes. I see that I was overthinking or underthinking this. One datastore cluster per tier seems the logical, correct approach with a single back-end array.
However, if you had multiple arrays, it seems there would be a benefit in creating a "Gold Cluster" for each array. That way you could ensure VMDKs were not on the same array (for backups, load-balanced VMs, etc.). Perhaps there is a feature in 5.1 or 5.5 that allows for this configuration within vCenter, but I don't see it in 5.0.