I wanted to share my experiences with SDRS: how I wanted to use it, how it has fared so far, and of course to let anyone point out anything I probably missed.
First off, in our environment we use multi-VMDK VMs. A VM will have a base OS VMDK and then, depending on the type of server, more VMDK disks attached as required. This has worked well in our environment and we aren't looking to change anytime soon. So a typical HA cluster of ours will have multiple OS LUNs and DATA LUNs. We even have a few replicated OS LUNs and replicated DATA LUNs for the VMs requiring DR.
How I envisioned SDRS working for us
We are lucky in that we don't have tiered storage. As far as we are concerned, everything is tier 1. We think in terms of "DR or not DR". So I thought we could use SDRS like this:
SDRS_OS_CLUSTER
SDRS_DATA_CLUSTER
SDRS_OS_CLUSTER_DR
SDRS_DATA_CLUSTER_DR
Since I'm not a huge fan of my VMDKs moving without my say-so...yet... (though I'm sure there will be a breakout session at VMworld 2012 about how SDRS is smarter than me, just as there was with DRS) my plan was to set SDRS on each of the datastore clusters to manual. I did this hoping SDRS would manage the initial placement of VMs and help balance our datastore usage without the use of a spreadsheet.
So with the above configuration, we hoped SDRS would handle initial placement of new VMs and keep our datastore usage balanced without resorting to a spreadsheet.
How SDRS actually worked for us
SDRS turned out not to be friendly to multi-VMDK VMs. When you create a VM, or move a VMDK into an SDRS cluster, it automatically assumes you want to keep the VMDK files together. This means that if you move all of a VM's DATA VMDKs into the SDRS cluster, SDRS will try to put them all on the same datastore. Working around this requires manually unchecking the "Keep VMDKs together" setting in the SDRS cluster's settings...or so we thought.
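To make that failure mode concrete, here's a toy model in plain Python of what "Keep VMDKs together" effectively demands of placement. This is not the real SDRS placement engine, and all names and numbers are made up for illustration; the point is simply that every disk must fit on one datastore together.

```python
def place_keep_together(vmdk_sizes_gb, free_space_gb):
    """Toy model of SDRS 'Keep VMDKs together' placement:
    every VMDK of the VM must land on a single datastore.

    vmdk_sizes_gb: list of disk sizes in GB
    free_space_gb: dict of datastore name -> free space in GB
    Returns the chosen datastore name, or None if no single
    datastore can hold the whole VM.
    """
    total = sum(vmdk_sizes_gb)
    # Only datastores that can hold ALL disks at once are candidates.
    candidates = [ds for ds, free in free_space_gb.items() if free >= total]
    if not candidates:
        return None
    # Prefer the candidate with the most free space.
    return max(candidates, key=lambda ds: free_space_gb[ds])


# Three 200 GB data disks, three datastores with 250 GB free each:
# plenty of aggregate space, but no single home for all 600 GB.
print(place_keep_together([200, 200, 200],
                          {"ds1": 250, "ds2": 250, "ds3": 250}))  # None
```

With keep-together in effect, 750 GB of total free space still can't place a 600 GB VM, which is exactly the kind of dead end described above.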
Even after unchecking the box, any future .vmdk move into the SDRS cluster still lands on the same datastore. There is an option to create an anti-affinity rule to keep the .vmdk files on separate datastores. That doesn't work either. If a VM has multiple disks whose combined size is larger than the largest free space on any single datastore, SDRS will not allow the Storage vMotion due to space issues. To make it work, you have to check the "Disable Storage DRS for this virtual machine" box during the Storage vMotion so you can select the datastores manually. Now the kicker: once the disks are moved and everything is spread across datastores...exactly what the SDRS anti-affinity rule should have done...you get a fault in SDRS that it could not fix the anti-affinity rule violation...BECAUSE STORAGE DRS IS DISABLED ON THE VM!!!!
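For contrast, here's a toy model of what the intra-VM anti-affinity rule *should* accomplish: each disk on a different datastore, so only each individual disk (not the sum) has to fit. Again, this is an illustrative sketch, not SDRS's actual algorithm; names and sizes are invented.

```python
def place_anti_affinity(vmdk_sizes_gb, free_space_gb):
    """Toy model of an intra-VM anti-affinity rule: place each
    VMDK on a distinct datastore, largest disks first.

    vmdk_sizes_gb: dict of disk name -> size in GB
    free_space_gb: dict of datastore name -> free space in GB
    Returns a dict of disk -> datastore, or None if it can't be done.
    """
    free = dict(free_space_gb)
    placement = {}
    # Place the biggest disks first so they get first pick of space.
    for disk, size in sorted(vmdk_sizes_gb.items(), key=lambda kv: -kv[1]):
        # Unused datastores with enough room for just this one disk.
        options = [ds for ds in free
                   if ds not in placement.values() and free[ds] >= size]
        if not options:
            return None
        ds = max(options, key=lambda d: free[d])
        placement[disk] = ds
        free[ds] -= size
    return placement


# The same VM that keep-together couldn't place: three 200 GB disks
# spread cleanly across three datastores with 250 GB free each.
print(place_anti_affinity({"data1": 200, "data2": 200, "data3": 200},
                          {"ds1": 250, "ds2": 250, "ds3": 250}))
```

The spread placement succeeds precisely where the single-datastore requirement fails, which is why the rule not being honored during the actual migration is so frustrating.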
So in summation: with one VMDK this works great. With one VMDK per SDRS cluster it also works great. With multiple VMDKs per cluster, where you want all your VMs' disks on one datastore and they can fit on one datastore, it works great. If you want multiple VMDKs per VM per cluster spread across your datastores...it leaves a lot to be desired. Let's hope changes are in the works.
I'd like to hear how others are using it.
Appreciate your deep dive on Storage DRS. I am also using Storage DRS in my vLab, but I have not spread a VM's VMDKs across different datastores. I kept all my eggs in one basket, so I never ran into the issue you faced.
Lengthy and informative post!!!