All,
Here's a quick rundown on my environment:
I'd noticed that the backups of one of my VMs were a lot larger than the actual data that was on the disk. While investigating, I also found that although the disk was thin provisioned, the "used storage" for this VM was almost the same as the provisioned storage. I did some hunting around and found basically the following 3 statements in a lot of the stuff I'd seen:
Well, I've taken the time to do this, and now my Symantec backups have shrunk as expected. Inside Windows I show ~27.92 GB of data, and the Symantec backup now shows it backed up ~28.10 GB of data (transport mode nbd). But in VMware I still show 154.01 GB of provisioned storage, with 153.92 GB of used storage on this virtual machine. This is after several iterations of moving it between a 2 TB LUN with a 4 MB block size on the EQL SAN and a 1.07 TB LUN with a 2 MB block size on the EMC SAN with storage VMotion.
Is there possibly something I'm missing (or misunderstanding), or has something changed since the articles I was reading were written (around Jan/Feb of 2011)? I somewhat expected the used storage to drop back down to around 28 GB or so after doing this.
Thanks,
Clif Godfrey
Let me save you some time: you can NOT reclaim space, period.
Either at the VM or LUN level.
The only way to do it is to convert the VM to a NEW VM and delete the old one; that should strip out the zeroed or unused data.
For LUNs it means creating NEW LUNs and moving VMs to those new LUNs. VMFS can RE-USE space, but not reclaim it as 'FREE'.
Parker isn't quite right - it is possible, but it's very array dependent, and you need to take some special action (until vSphere 5's UNMAP support works):
On an EMC array, for example, you can svmotion to a different LUN, but you then have to overwrite that original space with zeroes. That can be done by creating a large eagerzeroedthick VMDK (I generally do about 90% of the free space on the VMFS), then quickly deleting it. This allows the array to recognize the zeroes and reclaim the space.
Once UNMAP works, you won't need to do any of this, and it will be transparent and automatic.
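The zero-then-reclaim trick above can be sketched generically. This is a minimal illustration at the filesystem level, not Matt's actual procedure: on ESXi you would create the eagerzeroedthick VMDK with `vmkfstools` rather than writing a file; the function name, the `max_bytes` safety cap, and the temp filename here are all assumptions for the sketch.

```python
import os

def zero_fill_free_space(mount_point, fraction=0.9, max_bytes=None):
    """Write a zero-filled file covering ~fraction of free space, then delete it.

    Generic sketch of the 'overwrite free space with zeroes' step: a
    dedupe/thin-aware array can then recognize the zeroed blocks and
    reclaim them. max_bytes is a safety cap (an assumption, not part of
    the original procedure) so the sketch can be run on a small scale.
    """
    stat = os.statvfs(mount_point)
    free_bytes = stat.f_bavail * stat.f_frsize
    target = int(free_bytes * fraction)
    if max_bytes is not None:
        target = min(target, max_bytes)
    chunk = b"\0" * (1024 * 1024)  # write zeroes 1 MB at a time
    path = os.path.join(mount_point, "zerofill.tmp")
    written = 0
    try:
        with open(path, "wb") as f:
            while written < target:
                n = min(len(chunk), target - written)
                f.write(chunk[:n])
                written += n
    finally:
        # delete the fill file right away, as in the procedure above
        if os.path.exists(path):
            os.remove(path)
    return written
```

The key point is the delete at the end: the zeroes only exist long enough for the backing storage to see them; the space is then free again at the filesystem level.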
Matt wrote:
Parker isn't quite right - it is possible, but it's very array dependent, and you need to take some special action (until vSphere 5's UNMAP support works):
On an EMC array, for example, you can svmotion to a different LUN, but you then have to overwrite that original space with zeroes. That can be done by creating a large eagerzeroedthick VMDK (I generally do about 90% of the free space on the VMFS), then quickly deleting it. This allows the array to recognize the zeroes and reclaim the space.
Once UNMAP works, you won't need to do any of this, and it will be transparent and automatic.
Thanks.
For the majority of SANs you can't reclaim space, especially from a LUN. You still can't reclaim space from a VMFS datastore.
And it's not a complete "unmap":
http://www.yellow-bricks.com/2011/07/15/vsphere-5-0-unmap-vaai-feature/
Now one thing I need to point out is that this is about unmapping blocks associated with a VMFS volume; if you delete files within a VMDK those blocks will not be unmapped!
Parker,
Thanks for the reply. It just seemed I'd seen lots of articles, like the one below, that seemed to say "it should work".
http://www.thelowercasew.com/reclaiming-disk-space-with-storage-vmotion-and-thin-provisioning
I knew about the issue recovering the SAN space. For that I was just going to create a new LUN, then storage VMotion everything over on the weekend. I'm guessing either the articles were inaccurate, or a further patch somehow changed the behavior. Not that big of a deal to me; I just wanted to see if I could clean up some space w/o taking an outage.
Clif
Thanks.
For the majority of SANs you can't reclaim space, especially from a LUN. You still can't reclaim space from a VMFS datastore.
And it's not a complete "unmap":
http://www.yellow-bricks.com/2011/07/15/vsphere-5-0-unmap-vaai-feature/
Now one thing I need to point out is that this is about unmapping blocks associated with a VMFS volume; if you delete files within a VMDK those blocks will not be unmapped!
All the decent ones can. Hitachi, 3PAR, and EMC can all do this.
Correct - doing a straight 'rm' won't cause it to happen, but regular events (like an API delete or a svmotion) WILL cause the UNMAP code to run.