Evening all
We recently P2V'd 4 Dell servers which all went OK.
When it came to thin provisioning the storage, we've hit a slight snag. No matter what we do, we can't claw back any space through thin provisioning.
For example, we have one server, 62GB C drive. We've performed the following:
Defragged
Chkdsk'd
Sdelete -c (and even -z followed by another -c)
Then proceeded to migrate to another datastore as thin provisioned. We appear to get 0 space back, even though Windows reports 31.6GB of free disk space.
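For reference, the zero-fill step looks like this (C: is just our example drive — note that in newer SDelete releases the zero-free-space switch moved from -c to -z, which is why we ran both):

```shell
:: Zero out free space so the hypervisor can reclaim it on a
:: thin-provisioned migration (Sysinternals SDelete).
sdelete -c C:
sdelete -z C:
```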
Don't get me wrong, in the past the four steps above have always clawed back the free space, but this time round, on all four servers, we can't reclaim anything.
I've even converted back to thick and then to thin in case something was being funny.
Using ESX 4, fully patched, across all hosts. The only thing I can think of is that when we thin provisioned in the past we were running ESX 4 with no patches, and we've recently updated everything to the current release. But I can't really believe a patch/update would have broken something so fundamental.
Any ideas?
Thanks for investigating that for us Duncan! I'm not sure it's such a big problem now we've found a workaround, and I was assured by VMware support that it would be officially implemented in ESX v5.
Or you can use NFS: move it to an NFS datastore, then bring it back.
(See post in the other thread.. http://communities.vmware.com/message/1811673)
If using vSphere, make sure to select the files directly in the datastore browser!
If you copy or move the whole folder, ESXi will make the vmdk thick at the destination!
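If you have shell access to the host, another way to avoid the folder-copy pitfall is to clone the individual disk with vmkfstools, forcing the thin format at the destination — a rough sketch, with hypothetical datastore/VM paths:

```shell
# Clone a single VMDK to another datastore as thin; the paths are
# hypothetical examples — adjust for your own datastores and VM names.
vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk \
           -d thin \
           /vmfs/volumes/datastore2/myvm/myvm.vmdk
```

You would then repoint the VM at the cloned disk and delete the original once you've verified it boots.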
Was this in fact implemented in ESXi 5.0?
No it wasn't, and I believe they've made the problem even worse with vSphere 5, as VMFS5 only supports 1MB block sizes (unless you upgrade to VMFS5 from VMFS3, in which case it keeps the old block size, but you don't get all the benefits of VMFS5), so you cannot even move a thin provisioned VM to another datastore to free up the space. They have implemented "UNMAP", which frees up space on thin-provisioned LUNs at the SAN array end, but not within the actual VMDK.
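As I understand it (I haven't verified this on every build), that SAN-side reclaim is run manually per datastore from the host shell on ESXi 5.0, something like:

```shell
# Reclaim dead space on the thin-provisioned LUN backing a VMFS
# datastore (manual UNMAP on ESXi 5.0; later releases move this to
# 'esxcli storage vmfs unmap'). The 60 is the percentage of free
# space to use for the temporary balloon file.
cd /vmfs/volumes/mydatastore   # hypothetical datastore name
vmkfstools -y 60
```

Again, that only helps the array, not the guest-visible VMDK size.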
Bit silly really, thought it would have been a good feature to promote!
So in vSphere 5 is there any way at all of thinning out VMDKs?
If they've removed it completely, that's a total nightmare.
I guess it will be back to the "create a new VM and use converter to re-convert it into the new VM" method? A royal PITA if you ask me.
We'll just have to be careful and make sure people don't fill up our thin VMDKs :smileygrin:
Yuck. That takes ages, involves a lot of work and requires the VM to be taken out of service. Not good enough.
Agreed jfield!
Like most people using ESXi 5.0, we're using a Datastore Cluster comprised of VMFS5 volumes (which, as Ollie pointed out, are all 1MB block size).
This morning I tested the following:
All that being said, it only confirms that you can still achieve space reclamation with the process originally provided here, but with ESXi 5.0, provided you have some temp space and a lot of patience. In some instances, for VMs that you can bring down, Converter would probably be faster... but the steps listed above facilitate space reclamation for VMs where downtime isn't an option.
On a final note: the "UNMAP" feature they implemented for thin-provisioned LUNs on the SAN side has been known to cause issues, and as such should be used with extreme caution; both VMware and EMC currently recommend disabling it until a fix is found.
Here's to hoping VMware gets this sorted out!
-dew
Many thanks for producing a solution. We'll make sure we keep a VMFS3 datastore with a non-1MB block size.
Cheers!