VMware Cloud Community
Dragec
Contributor

Reclaim unused space on thin provisioned disk?

Hi all.

I have an ESXi host with 300GB of datastore space and 5 Linux guests. All of them are configured with thin provisioned disks and a maximum size of 200GB.

The first guest used about 100GB, so I tried removing temp files and some other large files. Although I deleted around 30GB, ESXi still shows the guest using 100GB.

I read on the forum that it isn't possible to reclaim this freed space without Storage vMotion, i.e. moving this VM to another datastore.

Is this really true? Is there any tool or procedure to reclaim the unused space without downtime?

57 Replies
jfield
Enthusiast

No, you misunderstand me. I *do* mean the VMFS blocksize. The "dd" command's "bs=" parameter is purely to make the "dd" command run faster. You can remove it with no ill effect.
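
For example, both of the following zero out the free space (the file name is just a placeholder); the first simply runs faster because it writes in 8 MB chunks:

dd if=/dev/zero of=/zerofile bs=8M
dd if=/dev/zero of=/zerofile

Delete the file afterwards with "rm /zerofile" and the zeroed space is back in the filesystem.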

continuum
Immortal

Hmm - every experienced ESX user I know uses an 8 MB blocksize for all datastores.


________________________________________________
Do you need support with a VMFS recovery problem? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

jfield
Enthusiast

Well, all I can say to that is that I am very glad I don't. Otherwise I would be totally screwed by this bug.


brettcne
Contributor

This didn't work for me; it actually made things worse. The disk is thin provisioned at 200GB and showed 160GB in use. I did the dd and then deleted the file, and now I'm using 200GB. When I Storage vMotion it to another datastore, it's still using 200GB, even though the Linux OS shows only 40GB in use.

jfield
Enthusiast

The problem I've seen with Linux has to do with file fragmentation: blocks get partially used.

If you want to leave the VM running, use the Standalone Converter to import it into a new VM. That's the only way in Linux to recover all the unused space, since Unix filesystems on the whole don't have defragmenters.

I must admit I've never seen one grow, but I have seen them not shrink.

I would have thought the only way for them to grow is block-size waste: if you have a 4MB block of which only 512 bytes are used, you waste just under 4MB. If you vMotion that to a 2MB blocksize, you waste just under 2MB. But if you then vMotion it to another datastore with an 8MB blocksize, you can theoretically end up wasting just under 8MB. So in that case your total used space will grow.

Jules.

hostasaurus
Enthusiast

I do exactly what jfield described, and I ran into the exact same issue.

We have ESXi 4.1 and use a 1 MB block size on our VMFS volumes, because most of our guests have no need to go beyond 100 GB, and even the unusual ones don't go beyond a few hundred gigs. They also don't grow very quickly. In any case, sometimes customers will upload huge amounts of data and then delete it, so I found a bunch of our guests wasting a lot of room.

I used dd to fill the guest disks with zeroes, then migrated from thin to thick and back, one LUN to another and back on our EMC: no change. So I created a new LUN, attached our ESXi hosts to it, and formatted it as a 2 MB VMFS. I did the exact same migration, thick to thin, and it recovered the space. Now we keep the 2 MB LUN around just for moving guests to and from when we need to shrink them.
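
In case it helps anyone reproduce this, here's roughly the sequence (the file name is a placeholder, and the migrations themselves are done through the vSphere Client):

# inside each Linux guest: fill the free space with zeros, then delete the file
dd if=/dev/zero of=/zerofile bs=1M
sync
rm /zerofile

# then Storage vMotion the guest to the 2 MB-blocksize VMFS as thick,
# and back to the original datastore as thin; the zeroed blocks get dropped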

So yeah, like jfield said, if you're using 8 MB out of necessity, for VMDKs larger than the smaller block sizes support, you're screwed if you need to recover that space without a long outage and a lot of pain. Or, as an ugly workaround, get a more advanced storage system that can dedupe, so you can thin provision below the VMware level.

zachlod
Contributor

Hi!

I just wanted to confirm that jfield is right. I don't know whether this is a bug or not, but shrinking a thin provisioned disk simply does not work as long as you Storage vMotion it between VMFS volumes with identical block sizes. I just tried that and it did not reclaim any space. Afterwards I created another datastore and used a different block size for it. Now, as soon as you Storage vMotion a machine that has been zeroed out beforehand to that datastore, the VMDK shrinks.

regards

Oczkov
Enthusiast

Hi Guys,

What you probably need to do first is write zeros to the free space on each of the Linux filesystems, to make the VMware (or storage array) thin provisioning mechanism see those blocks as untouched, containing no data (all zeros).

1. Classic approach

I think you might want to look at the dd utility method described here:

http://www.michaelcole.com/node/13

the command you need is this one:

sudo dd if=/dev/zero of=/zerofile; sudo rm /zerofile

It creates a file filled with zeros until the filesystem is full (out of space) and then deletes the file, leaving the free space zeroed for you. Repeat for every filesystem.
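
A slightly more practical variant, assuming the filesystem you want to clean is mounted at /mnt/data (the mount point and file name are placeholders; bs= only speeds things up, and sync makes sure the zeros actually reach the disk before the file is removed):

sudo dd if=/dev/zero of=/mnt/data/zerofile bs=1M
sync
sudo rm /mnt/data/zerofile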

2. Scrub

Please also look at the scrub utility (probably with the -X option), which is available on Linux:

Man pages and some examples:

http://linux.die.net/man/1/scrub

http://www.bgevolution.com/blog/scrub-file-shredding-for-linux/
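
A minimal sketch, assuming your scrub build supports the fillzero pattern (the directory name is a placeholder; -X creates the directory and fills it with files until the filesystem is full):

sudo scrub -p fillzero -X /fillme
sudo rm -rf /fillme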

3. Zerofree (Debian/Ubuntu)

Also give the zerofree utility a try; it should be available on Debian and Ubuntu based distros. Beware - it is slow.

http://manpages.ubuntu.com/manpages/natty/man8/zerofree.8.html

http://community.linuxmint.com/software/view/zerofree

http://maketecheasier.com/shrink-your-virtualbox-vm/2009/04/06
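
Note that zerofree needs the filesystem unmounted or mounted read-only, so for the root filesystem you would typically run it from single-user mode or a live CD. A minimal sketch (the device name is a placeholder for your ext2/ext3 partition):

mount -o remount,ro /
zerofree -v /dev/sda1
mount -o remount,rw /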

4. Shred

You may also want to look at the shred command, but it may not be what you really need.

http://linux.die.net/man/1/shred

None of these is as straightforward as sdelete (-c) on Windows, but they should let you take care of the first step.
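
For comparison, the Windows command is a one-liner (drive letter as appropriate; note that older SDelete releases zero free space with -c, while newer ones use -z for zeroing and repurpose -c for a secure-wipe pattern):

sdelete.exe -z c: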

The second step should be trying to convince the thin provisioning mechanism to compact the zeros. EMC's block compression available on CX4/VNX storage arrays would surely do it when you enable compression on a thick or thin LUN, squeezing all zeros out of it.

Regards,

oczkov

srikanthraavi
Enthusiast

Dear all,

Datastore space issue:

We have a 2TB thin provisioned datastore and created 4 Linux virtual machines on top of it. I tried moving (Storage vMotion) one virtual machine to another datastore, but we are still facing a space issue on the datastore.

Please help.

Thanks

Srikanth Raavi

hostasaurus
Enthusiast

You need to zero the free space in the guest filesystems, then move the VM to a VMFS with a different block size when going from thin to thick and back; otherwise it won't recover the space.

lklein777
Contributor

This seems like such an issue with VMware. I have read through this thread; is the community basically saying that, in order to free up space after you have deleted files within Windows or Linux, you have to zero out the free space and then either do a vmkfstools copy or Storage vMotion the VMDK?

Am I the only one who sees the strange amount of work just to free up space in VMware? A simple Windows delete of the recycle bin has become this large operation, where the server has to be shut down or Storage vMotioned to free up space.

Is there no simpler way (a command, or something like a Windows recycle bin delete) to free up space with thin disks?

hostasaurus
Enthusiast

Yup, unfortunately that's the only way. The zeroing of the space does make sense, so I'm not holding that against them: the underlying operating system (ESXi) has no way of knowing what is or isn't in use within the guest's filesystem inside the VMDK, so setting all the free space to zero is the only indicator it can see in order to shrink the disk. However, we're about 4+ years overdue for a way to do this that doesn't involve vMotioning the guest to a VMFS with a different block size, or doing an offline operation; that's simply ridiculous, especially in large environments.

Maybe storage vendors are paying them to not implement the feature lol.

lklein777
Contributor

Yeah, way more than 4 years on such a simple task. This is something I'm sure the devs could have found a solution for by now.

We recently filled up a 1TB VM that we use for temporary files and other "useful" IT data. We removed the data, but the disk was thin provisioned at 1TB, so VMware sees it using that amount of space, and our free space on the local datastore is now around 236GB. That rules out zeroing the free space and copying within the local datastore, since the copy would blow past that 236GB. So we will have to zero the space, hook up another NFS datastore on external storage, clone the disk with vmkfstools -i vmname.vmdk -d thin newvmdk.vmdk, move the original over to the NFS datastore, and copy the newly "zeroed out" vmdk back to the local datastore.
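
Roughly, with placeholder paths, and with the VM powered off for the vmkfstools part:

# inside the guest, before shutting it down: zero out the free space
dd if=/dev/zero of=/zerofile bs=1M ; rm /zerofile

# on the ESXi host: clone the disk as thin onto the NFS datastore
vmkfstools -i /vmfs/volumes/local/myvm/myvm.vmdk -d thin /vmfs/volumes/nfs1/myvm/myvm.vmdk

# then re-point the VM at the new vmdk (or copy it back) and delete the old one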

This is so f******ing involved just to free up some space.

hostasaurus
Enthusiast

Not that it will be any easier, or less annoying, but if you'd like to avoid taking your machine offline to run the vmkfstools step, you could set up a Linux box as an iSCSI server, hook your VMware host up to it, format it as VMFS 3 with a different block size than your production disk, then Storage vMotion the VM to it and back; that will shrink it. Like I said, just as annoying and tedious, but it avoids an outage if you need that.
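
If anyone wants to try it, here's a rough sketch of the Linux side using the tgt iSCSI target (the IQN and backing device are placeholders, and tgtd has to be running first):

tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2012-01.local.lab:shrink
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/vg0/shrinklun
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

Then add the target to the ESXi host's iSCSI initiator, format it as VMFS with a different block size than production, and Storage vMotion the VM over and back.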

toelie888
Contributor

I am not able to punch zeros on a VMDK residing on an NFS datastore. Punching zeros on VMDKs on iSCSI and direct attached datastores works fine, though.

Can anybody confirm that this is normal behavior? I can't find the answer using search engines.

Oczkov
Enthusiast

Hi,

For NFS datastores the story is a little different. Please look at KB 1005418, VMware KB: Using thin provisioned disks with virtual machines:

  • With NFS datastores, the provisioned disk format will be thin provisioned by default which cannot be changed. With vSphere 5.0, you can specify the provisioned format. For example, you can specify thick provisioning if the storage array or filer supports it via VAAI.



You may also consider reading these articles:


https://communities.vmware.com/thread/447915

https://communities.vmware.com/message/2246194

http://miketrellosblog.arcadecab.com/2010/09/compressing-windows-nfs-share-to-simulate-esxi-thin-pro...

https://communities.netapp.com/message/69231


Best regards,


toelie888
Contributor

Thank you for the fast response. I have read the articles you linked.

I know that NFS datastores as of ESXi 5.0 automatically use thin disks. But while I can punch zeros (vmkfstools -K someserverdisk.vmdk) on thin disks that reside on iSCSI datastores and direct attached datastores, the same command does nothing for thin disks on NFS datastores, not even an error.

So I have to Storage vMotion the offline server to an iSCSI datastore, start the server, write zeros to the unused space, take the server offline again, punch the holes, Storage vMotion the server back to the NFS datastore, and finally start the virtual server.

This takes a lot of time, and I only have vSphere 5 Essentials, so live Storage vMotion is not an option for me.


So, can anybody confirm that punching zeros (vmkfstools -K someserverdisk.vmdk) does not work on NFS datastores at all?


Thanks in advance

briggsb
Contributor

toelie888, sorry, I can't help you out with an answer to your question just yet, but I wondered if you could elaborate on things for the benefit of my simple mind?

I too have the same setup, with Essentials, so I can't do live Storage vMotion either. I also have production servers on iSCSI datastores, and some other servers on "cheap" NAS boxes that are NFS stores. So I could test your theory, but I need to know the steps in a bit more detail. What do you mean by "punching zeros"? What, in detail, are you doing to achieve this?

I need to do this very soon, so I can let you know my findings, but please educate me first.

Many thanks, Alan

hostasaurus
Enthusiast

I've never heard the expression he used either, but he gave the command he used to accomplish it on the offline VMDK: vmkfstools -K someserverdisk.vmdk

Oczkov
Enthusiast

VMware also uses the term "punching zeros". Please look here:

Removing Zeroed Blocks

Use the vmkfstools command to convert any thin, zeroedthick, or eagerzeroedthick virtual disk to a thin disk with zeroed blocks removed.

-K --punchzero

This option deallocates all zeroed out blocks and leaves only those blocks that were allocated previously and contain valid data. The resulting virtual disk is in thin format.
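
For example, with the VM powered off (the path is a placeholder):

vmkfstools -K /vmfs/volumes/datastore1/myvm/myvm.vmdk

Combined with zeroing the free space inside the guest beforehand, this shrinks the thin disk in place, without needing a second datastore with a different block size.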
