This seems like such an issue with VMware. I have read through this thread, and is the community basically saying that in order to free up space after you have deleted files within Windows or Linux, you have to zero out the free space and then either do a vmkfstools copy or Storage vMotion the VMDK?
Am I the only one who sees what a strange amount of work this is just to free up space in VMware? A simple emptying of the Windows Recycle Bin to free up space has become a major operation, where the server has to be shut down or vMotioned.
Is there no simpler way (a command, or something like emptying the Recycle Bin in Windows) to free up space with thin disks?
Yup, unfortunately that's the only way. Zeroing the space does make sense, so I'm not holding that against them: the underlying operating system (ESXi) has no way of knowing what is or isn't in use within the guest's filesystem in the VMDK, so setting all the free space to zero is the only way to give it an indicator it can see in order to shrink the disk. However, we're about 4+ years overdue for a way to do this that doesn't involve vMotioning the guest to a VMFS volume with a different block size, or doing an offline operation; that's simply ridiculous, especially in large environments.
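In case it helps anyone, inside a Windows guest the zeroing step is usually done with Sysinternals SDelete, something like the sketch below. The drive letter is just an example, and the flag has changed between SDelete versions, so check your copy's usage text first.

    rem zero all free space on C: so ESXi can later reclaim those blocks
    rem (recent SDelete builds use -z "zero free space"; older ones used -c)
    sdelete.exe -z C: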
Maybe storage vendors are paying them to not implement the feature lol.
Yeah, way more than 4 years for such a simple task. This is something I'm sure the devs could have found a solution for.
We recently filled up a 1 TB VM that we use for temporary files and other "useful" IT data. We removed the data, but the disk was thin provisioned at 1 TB, so VMware still sees it using that amount of space, and free space on the local datastore is now around 236 GB. That leaves no option for a local zero-and-copy, since the copy would exceed those 236 GB. So we will have to zero the free space, hook up an extra NFS datastore on external storage, run vmkfstools -i vmname.vmdk -d thin newvmdk.vmdk against the hard disk file, move the original over to the NFS datastore, and copy the newly "zeroed out" VMDK back to the local datastore.
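For reference, the clone step runs from the ESXi shell roughly as follows; the datastore and VM paths here are placeholders, not our actual names:

    # copy the zeroed source disk to a new thin-provisioned VMDK on the NFS datastore
    vmkfstools -i /vmfs/volumes/local-ds/tempvm/tempvm.vmdk -d thin /vmfs/volumes/nfs-ds/tempvm/tempvm-new.vmdk

After that you swap the files around and point the VM at the shrunken copy.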
This is so f******ing involved just to free up space.
Not that it will be any easier, or less annoying, but if you'd like to avoid taking your machine offline to run the vmkfstools steps, you could set up a Linux box as an iSCSI server, hook your VMware host up to it, format it as VMFS-3 with a different block size than your production disk, and then Storage vMotion the VM to it and back; that will shrink it. Like I said, just as annoying and tedious, but it avoids an outage if you need that.
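If anyone wants to try this, here's a minimal sketch of the Linux side using the tgt iSCSI target; the IQN, backing device, and host address are assumptions to adapt for your environment:

    # /etc/tgt/targets.conf -- export a spare block device as an iSCSI LUN
    <target iqn.2014-01.lab.example:scratch-vmfs>
        # spare disk the ESXi host will format as VMFS with a different block size
        backing-store /dev/sdb
        # limit access to the ESXi host's IP
        initiator-address 192.168.1.10
    </target>

Restart tgtd, add the target under the host's iSCSI software adapter, format it as a VMFS datastore with a block size different from the source, and Storage vMotion across and back.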
I am not able to punch zeros on a VMDK that resides on an NFS datastore. Punching zeros on VMDKs on iSCSI and direct-attached datastores works fine, though.
Can anybody confirm that this is normal behavior? I cannot find the answer using search engines.
For NFS datastores the story is a little different. Please look at KB1005418, VMware KB: Using thin provisioned disks with virtual machines:
- With NFS datastores, the provisioned disk format will be thin provisioned by default which cannot be changed. With vSphere 5.0, you can specify the provisioned format. For example, you can specify thick provisioning if the storage array or filer supports it via VAAI.
You may also consider reading these articles:
Thank you for the fast response. I read your articles.
I know that NFS datastores as of ESXi 5.0 automatically get thin disks. But I can punch zeros (vmkfstools -K someserverdisk.vmdk) on thin disks that reside on iSCSI datastores and direct-attached datastores. The same command does nothing for thin disks on NFS datastores, not even an error.
So I have to Storage vMotion the offline server to an iSCSI datastore; start the server; write zeros to the unused space; take the server offline again; punch the holes; Storage vMotion the server back to the NFS datastore; and finally start the virtual server.
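For the "write zeros" step inside a Linux guest, the usual trick is a zero-fill file, sketched below; mind that the temporary file fills the filesystem completely while it runs, which can upset running applications.

    # fill the filesystem's free space with zeros, then delete the file
    # (dd exits with "No space left on device", which is expected here)
    dd if=/dev/zero of=/zerofile bs=1M
    sync
    rm -f /zerofile
    sync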
This takes a lot of time, and I only have vSphere 5 Essentials, so live Storage vMotion is not an option for me.
So can anybody confirm that punching zeros (vmkfstools -K someserverdisk.vmdk) does not work on NFS datastores at all?
Thanks in advance
toelie888, sorry, I can't help you out with an answer to your question just yet, but I wondered if you could elaborate on things for the benefit of my simple mind?
I too have the same setup, with Essentials, so I can't do live Storage vMotion either. I also have production servers on iSCSI datastores, and some other servers on "cheap" NAS boxes that are NFS stores. So I could test your theory, but I just need to know the steps in a bit more detail. What do you mean by "punching zeros"? What exactly are you doing to achieve this?
I need to do this very soon, so I can let you know my findings, but please educate me
Many thanks, Alan
I've never heard the expression he used, but he gave the command he used to accomplish it on the offline VMDK: vmkfstools -K someserverdisk.vmdk
VMware also uses the term "punching zeros". Please look here:
Removing Zeroed Blocks
Use the vmkfstools command to convert any thin, zeroedthick, or eagerzeroedthick virtual disk to a thin disk with zeroed blocks removed.
Enable SSH on the ESXi server in the vSphere Client (Configuration -> Security Profile -> Service Properties -> SSH options -> start).
Start an SSH client (PuTTY is a good one for Windows) and connect to your ESXi IP address with the root username and your password.
Go to a server with thin disks on your NFS datastore (cd /vmfs/volumes/[your NFS datastore name]/[your server name]).
Make sure the server is powered off and has no snapshots, then punch some holes (vmkfstools -K [your diskname].vmdk).
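Put together, the session looks roughly like this; datastore, VM, and disk names are placeholders:

    # from an SSH session on the ESXi host, with the VM powered off
    cd /vmfs/volumes/my-nfs-datastore/my-vm
    # deallocate the zeroed blocks from the snapshot-free thin disk
    vmkfstools -K my-vm.vmdk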
Nothing happens on either of my NFS datastores. When I do this on my iSCSI datastore, I see a progress counter and the disk actually gets smaller.
Thanks for the prompt reply! OK, I can confirm exactly what you are finding - unfortunately! So it looks like it is normal behaviour, which is a disappointment! A bigger disappointment is that this isn't a routine easily carried out through VMware Tools or the like; surely it's a common request for anyone using thin disks.
It certainly is a common request. The only logic I can see behind them doing nothing for years and years now is the fact that EMC bought them; maybe they want people to waste tons of disk space so they buy more storage. We decided to task one of our staff with zeroing and live-migrating bloated VMs back and forth every few months, and it never fails to recover several terabytes of wasted space on our arrays.
That is one of the few nice things about the horrendous new 5.5 interface: on the Home -> vCenter -> Virtual Machines screen, it gives you a chart of all of your machines, with provisioned space and used space as two of the columns, and you can sort by them. That makes it very easy to spot the bloated guests to target for cleanup.
It's a lot less work when you have live storage migration.
@briggsb: Did you also confirm that it does work on your iSCSI datastore?
It did indeed run from the iSCSI datastore, though to be fair, I didn't wait for it to finish to see the results; it was a test DS...