netwrkmgr
Contributor

ESXi 5 Storage DRS


We are implementing Storage DRS for a two-host vSphere 5 environment (HP Image 5.0.0 623860) connected to a P2000 FC MSA. To conserve space, we are considering thin-provisioning the OS C: VMDKs. Our research indicates there may be issues with space reclamation. Are there any known issues with VMware reclaiming space on thin VMs? Will Storage DRS move the VMs properly if they are thin, and reclaim the space that was being used? Are there any pitfalls to be aware of (besides monitoring the space and setting alarms)?

Many thanks in advance for any assistance.

10 Replies
memaad
Commander

Hi,

Storage DRS balances your storage space and I/O load.

Copied from the VMware website:

Storage DRS continuously balances storage space usage and storage I/O load and avoids resource bottlenecks to meet application service levels and increases manageability of storage at scale. Storage DRS allows you to:

  • Easily deploy additional storage capacity and seamlessly take advantage of additional capacity when new storage is added to a pool of storage volumes
  • Maintain datastores in a non-disruptive manner
  • Improve service levels for all applications
  • Increase vSphere administrators' productivity by allowing them to monitor and manage additional infrastructure

http://www.vmware.com/products/datacenter-virtualization/vsphere/storage-drs.html

Regards

Mohammed

Mohammed Emaad | VCP 3, 4, 5 | VCP-NV 6 | VCP-DT 51 | VCAP4-DCA | VCAP5-DCA | Mark it as helpful or correct if my suggestion is useful.
rickardnobel
Champion

netwrkmgr wrote:

Research indicates there may be issues with space reclamation. Are there any known issues with VMware reclaiming space on thin VMs?

By space reclamation for thin VMs, do you mean space inside the VMDKs that was once used, but is now logically unused from the guest operating system's point of view?

My VMware blog: www.rickardnobel.se
netwrkmgr
Contributor

Thanks for the reply. I'm not sure. Some of the research indicates that Storage vMotioning thin VMs to different datastores did not release the previously used blocks, and you needed to run "UNMAP" from the CLI to fix it.

Below is an example of one of the articles:

http://blogs.vmware.com/vsphere/2012/04/vaai-thin-provisioning-block-reclaimunmap-in-action.html

So, the question now is: does this happen automatically, or is there a manual process to reclaim the space? Will this be applicable to our environment?


rickardnobel
Champion

netwrkmgr wrote:

So, the question now is: does this happen automatically, or is there a manual process to reclaim the space? Will this be applicable to our environment?

To reclaim space from inside the VMs there is unfortunately no really simple way; however, it can be done with some work:

http://rickardnobel.se/reclaim-disk-space-from-thin-provisioned-disks

netwrkmgr wrote:

Some of the research indicates that Storage vMotioning thin VMs to different datastores did not release the previously used blocks, and you needed to run "UNMAP" from the CLI to fix it.

The UNMAP command was introduced to inform a SAN providing "thin" LUNs when VMDKs were removed (deleted or moved through Storage vMotion), but it did not really work as expected and was disabled in an early ESXi 5 patch. Here is some more information, including how to verify whether it is enabled: http://rickardnobel.se/scsi-unmap-vaai-command-removed-in-esxi-5-patch

My VMware blog: www.rickardnobel.se
netwrkmgr
Contributor

Thanks again. So, this is only applicable if the LUNs are thin provisioned from the storage? We are using traditional LUNs (volumes) from the P2000. Does this mean that any thin-provisioned VMs will vMotion properly, without additional manual "cleanup"?

Thanks again for your help, it is greatly appreciated.

rickardnobel
Champion

netwrkmgr wrote:

So, this is only applicable if the LUNs are thin provisioned from the storage? We are using traditional LUNs (volumes) from the P2000.

The so-called VAAI UNMAP commands only apply if the LUNs themselves are "thin", so if you are using traditional LUNs then this is not applicable.

netwrkmgr wrote:

Does this mean that any thin-provisioned VMs will vMotion properly, without additional manual "cleanup"?

Any thin VM can be moved by Storage vMotion without problems, both with and without SDRS; however, the process will not reclaim any unused space inside the VMDK. To shrink the VMDK you must follow the manual and somewhat involved process outlined in the first link.
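To picture why a thin VMDK never shrinks on its own, here is a minimal conceptual model in Python (my own illustration, not VMware code): blocks are allocated on first write, and a guest-level delete only updates the guest filesystem's own metadata, so the VMDK layer never learns that the blocks are free again.

```python
# Conceptual model (illustration only, not VMware code): a thin VMDK
# allocates backing blocks on first write and keeps them forever,
# because guest-level deletes never reach the VMDK layer.

class ThinVmdk:
    def __init__(self, provisioned_blocks):
        self.provisioned = provisioned_blocks  # logical (provisioned) size
        self.allocated = set()                 # backing blocks in use

    def guest_write(self, block):
        assert block < self.provisioned
        self.allocated.add(block)              # first write allocates

    def guest_delete(self, block):
        # The guest filesystem just marks the block free in its own
        # metadata; nothing is communicated to the VMDK layer.
        pass

    def allocated_size(self):
        return len(self.allocated)

disk = ThinVmdk(provisioned_blocks=100)
for b in range(40):
    disk.guest_write(b)
for b in range(40):
    disk.guest_delete(b)          # guest frees everything...
print(disk.allocated_size())      # ...but the VMDK still holds 40 blocks
```

This is why the reclaim procedure linked above has to zero the free space from inside the guest first, so the hypervisor can then detect and drop the unused blocks.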

My VMware blog: www.rickardnobel.se
netwrkmgr
Contributor

Thanks. So it appears that if the used space within the VM is not changing, there is no need to reclaim after a Storage vMotion. The VMDK will appear on the new LUN exactly as it did originally, and the original VMDK will be removed from the old LUN, freeing up the space. Correct?

EdWilts
Expert

Although thin provisioning is officially supported, it doesn't actually work - Storage DRS was never designed for thin-provisioned VMDKs.

This is trivial to duplicate:

1. Create a 10 GB datastore and put a 20 GB thin-provisioned guest on it. You'll see that this works. Delete the guest.

2. Create a Storage DRS cluster and put the same datastore in it. Try to create the same thin-provisioned guest again and you'll see that it fails.

This is documented in http://kb.vmware.com/kb/2017605

You can only thin-provision guests if the total fully-allocated size of the disks doesn't exceed the size of the datastore. Why you'd thin-provision in that situation is beyond me.

This restriction also exists if you de-dupe your datastores. SDRS doesn't actually consider the free space of the datastores - it compares the sum of the provisioned sizes of the VMDKs to the size of the datastores.
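The rule described above can be sketched as a rough model (an illustration of the observed behavior, not actual SDRS code): the placement check sums provisioned VMDK sizes against datastore capacity and ignores how much space is actually free.

```python
# Rough model of the SDRS placement rule described above (illustration
# only, not VMware's implementation): placement is judged on the sum of
# *provisioned* VMDK sizes, not on actual free space in the datastore.

def sdrs_allows_placement(provisioned_vmdks_gb, new_vmdk_gb, capacity_gb):
    """True if the new disk's full provisioned size still fits."""
    return sum(provisioned_vmdks_gb) + new_vmdk_gb <= capacity_gb

# The repro above: a 20 GB thin guest on an empty 10 GB datastore.
# Outside an SDRS cluster this placement succeeds; under SDRS the
# equivalent check rejects it, even though the datastore is empty:
print(sdrs_allows_placement([], 20, 10))   # False
```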

.../Ed (VCP4, VCP5)


intr1nsic
Contributor

My question: is there an advanced option that will change the SDRS algorithm to look at the reported total/used space of the datastore via the NFS RPC call instead of calculating provisioned space per VM?

With dedupe, you can get ratios high enough that, say, 40% of a volume in an SDRS pod is unusable.

I'm hoping there is an option to say something like:

if (totalVMDKSizes + newVMDKSize) < (datastoreTotal * spaceUtilizationConfig):

    provisionVM

elif newVMDKSize < ((datastoreTotal - datastoreUsed) * spaceUtilizationConfig):

    provisionVM

Basically: do what it does now, and fall back to, "well, one of the datastores reports free space via the NFS RPC usage; if the space needed is still less than that reported free space, let's go ahead and provision the VM anyway" <- an advanced option you can set.
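A runnable version of that idea might look like this (hypothetical: the two-stage check and the utilization parameter are the advanced option being wished for, not an existing SDRS setting):

```python
# Hypothetical two-stage placement check - the advanced option the post
# is asking for. No such SDRS setting exists; the names are made up.

def would_provision(total_vmdk_gb, new_vmdk_gb, ds_total_gb, ds_used_gb,
                    space_utilization_config=0.8):
    # Stage 1: current SDRS behavior - the sum of provisioned VMDK
    # sizes must fit under the configured utilization threshold.
    if total_vmdk_gb + new_vmdk_gb <= ds_total_gb * space_utilization_config:
        return True
    # Stage 2: fallback - trust the free space the array reports over
    # NFS, which already reflects the dedupe savings.
    reported_free_gb = ds_total_gb - ds_used_gb
    return new_vmdk_gb <= reported_free_gb * space_utilization_config

# Deduped datastore: 2000 GB capacity, only 800 GB actually used, but
# 1900 GB of VMDKs provisioned. Stage 1 fails (2000 > 1600), yet the
# fallback lets a 100 GB VM through (100 <= 960):
print(would_provision(1900, 100, 2000, 800))   # True
```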

We are basically running into an issue where we have 1 pod / 2 datastores with roughly 40% dedupe, and 2 TB per datastore sitting empty that we can't use.

EdWilts
Expert
intr1nsic wrote:

My question: is there an advanced option that will change the SDRS algorithm to look at the reported total/used space of the datastore via the NFS RPC call instead of calculating provisioned space per VM?

We are basically running into an issue where we have 1 POD / 2 Datastores with roughly 40% dedupe and 2TB per datastore sitting empty we can't use.

I feel your pain, but there's nothing in there today, nor is any fix/workaround planned for any 5.x release. It's rumored to be addressed in 6.x, but it requires a redesign of SDRS. Apparently the developers never envisioned that somebody would use SDRS with either de-duped datastores or thin-provisioned VMDKs.

I've got some volumes with over 80% de-dupe.  I calculated at one point that if I wanted to use SDRS in our environment and not thin provision anything (either controller or guest), I'd have to buy another 80TB of disk space.  SDRS is not worth another quarter-million bucks of enterprise storage.

.../Ed (VCP4, VCP5)