I have an ESXi datastore on a 2TB SSD, which in reality provides 1.82TB of usable space. It contains a single VM with a 2TB thin-provisioned virtual disk, brought over from a vSAN datastore (which had much more than 2TB available). The guest OS is only using 720GB, so of course the VM worked fine on the new datastore despite being over-provisioned.
Problem is... after freeing up a lot of space within the guest OS, I wanted to zero out the free space and compact the thin vmdk. But when I ran the dd command to zero the free space, the guest, thinking 2TB was available, wrote until the vmdk grew to 1.82TB. That filled the datastore to 100% and froze the VM, since no space was left on the datastore for cache, nvram, etc., I guess.
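In case it helps anyone repeating this, here is a hedged sketch of a safer way to do the zeroing step: compute how much free space exists and deliberately stop short of it, so the thin vmdk can't grow past what the datastore can actually hold. The /tmp target and the 10GB headroom are assumptions; adjust for your own layout.

```shell
# Sketch only (not the exact commands from the post above).
TARGET=/tmp
HEADROOM_MB=10240   # leave ~10GB of free space untouched as headroom
free_mb=$(df -m --output=avail "$TARGET" | tail -n 1 | tr -d ' ')
write_mb=$((free_mb - HEADROOM_MB))
# Print the dd command instead of running it; run it by hand once the
# numbers look sane. Afterwards: sync && rm "$TARGET/zeroes", shut the
# VM down, then run vmkfstools -K on the ESXi host.
echo "dd if=/dev/zero of=$TARGET/zeroes bs=1M count=$write_mb status=progress"
```

Printing the command first is deliberate: if the headroom math is wrong, you see it before anything is written.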
I thought, no problem, I'll just run "vmkfstools -K" (the punch-zero command) with the VM powered off and it will shrink the vmdk by the space already zeroed, freeing up the datastore again. Problem is... the hole-punch command ran to 100% over the course of an hour, yet the vmdk is still 1.82TB and the datastore is still 100% full. I used "du" to check the size of the vmdk, not "ls -l", and before running vmkfstools -K I deleted the "zeroes" file from /tmp in the VM using a live CD.
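For anyone else checking their results: the "du" vs "ls -l" distinction matters because a thin vmdk, like any sparse file, has a logical size and an allocated size that can differ wildly. A minimal local illustration (the filename is hypothetical):

```shell
# Create a sparse file: 1GB logical size, almost nothing allocated --
# the same distinction as a thin vmdk's provisioned vs. used size.
truncate -s 1G sparse.img
ls -l sparse.img    # shows the logical size: 1073741824 bytes
du -k sparse.img    # shows blocks actually allocated: ~0 KB
rm sparse.img
```

So "du" on the -flat.vmdk is the right number to watch before and after a hole punch; "ls -l" will always report the full provisioned size.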
Any ideas why, despite deleting the zeroes file and hole punching, it isn't reclaiming the zeroed-out space? How can I get my space back and fix the full datastore?
The virtual disk is encrypted within Ubuntu using LUKS with an LVM volume. Could this be why ESXi can't hole-punch the zeros, because at the vmdk level it doesn't "see" any? I couldn't find anything on the VMware site or third-party sites about whether encrypted disks work with this process. If this is the cause, is there any way around it?
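This is very likely the cause: dm-crypt encrypts everything the guest writes, so the "zeros" land on the vmdk as non-zero ciphertext, and the punch-zero pass finds nothing to reclaim. A hedged sketch of one workaround is to pass TRIM/discard through the storage stack instead of writing zeros (device and mapping names here are hypothetical, and this assumes a thin disk on VMFS6 with UNMAP-capable virtual hardware):

```shell
# Assumption: /dev/sdb1 is the LUKS partition, "cryptroot" the mapping name.
# Open the container with discards allowed (off by default for dm-crypt):
cryptsetup open --allow-discards /dev/sdb1 cryptroot
# LUKS2 can store the flag persistently instead:
#   cryptsetup refresh --persistent --allow-discards cryptroot
# With LVM on top, fstrim passes discards through regardless of the
# issue_discards setting in lvm.conf (that option only affects lvremove).
# Then trim the mounted filesystem's free space:
fstrim -v /
# If the discards reach the vmdk, the space is reclaimed without any
# in-guest zeroing or vmkfstools -K at all.
```

Note that dm-crypt disables discards by default precisely because they leak which blocks are unused, so enabling them is a (usually acceptable) security trade-off.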
Needless to say, once this is resolved, I'll be converting to a smaller thick disk to avoid this issue in the future.
EDIT: I also thought maybe the free space wasn't properly zeroed in the guest OS, because the VM locked up once the datastore filled. So, from a live CD, I created just a 50GB zero file in the guest, deleted it, and ran the hole punch again, hoping to reclaim at least 50GB. It didn't work; the vmdk stays at 1.82TB after the hole punch.
I can confirm very similar behaviour.
64 GB thin-provisioned virtual hard drive
63.48 GB allocated on the DS
From the virtual machine I can see:
68GB total / 22GB free.
I ran "sdelete -z", shut down the VM, and then ran "vmkfstools -K" on the path to the .vmdk.
It didn't work on the first try; I had to retry 3-4 times, each time getting the error "Could not punch hole in disk: Device or resource busy".
Finally, it ran to 100%, but no space was freed up.
On the DS there is still 63.19GB allocated.
This has always worked quite reliably, but now it just won't free up any space.
Previously, in version 5.5, it was possible to have two datastores with different block sizes and free up space just by moving the VM back and forth between them. Nowadays, there seems to be no clear way to free space from a thin-provisioned disk.
VMware ESXi, 7.0.3, 20842708
I have the same crash problem with "vmkfstools -K disk.vmdk": it fails at a random, increasing percentage (4%, 6%, 7%, 20%, 21%), and I have no more maintenance window to continue. The disk was partially punched: "du -k disk-flat.vmdk" reported 609 -> 606 -> 579 -> 576GB, while the "true" usage is less than 120GB.
Could not punch hole in disk 'disk.vmdk': Device or resource busy [....]
Has anyone gotten any reaction from VMware?
VMware ESXi, 7.0.3, 20842708