We are using a disk array that supports thin provisioning and zero-block detection (3PAR T400). When the array detects that a 16KB block has been zeroized it de-allocates the block and returns it to the 'scratch' pool so another LUN can use it. We can manually zeroize blocks within a VM using the Windows sdelete command. This works perfectly, and I've done testing to prove to myself the unused blocks are returned to the array's scratch pool.
However, I don't know of a way to zeroize unused VMFS blocks that aren't occupied by a VM. For example, if I have a 200GB VM that I Storage vMotion to another LUN, the 200GB of data is not zeroized on the original LUN, so the array still thinks those blocks are allocated. I want to reclaim that previously allocated 200GB of VMFS space.
I could create an eager zeroed VMDK that was sized to use up almost all available datastore space, but that is not as automated as I'd like. I'd like a VMFS command that would just zeroize all unused VMFS blocks.
Ultimately it would be best if VMware used the VAAI extensions to tell the array to zeroize all unused blocks when VMFS space was freed up, say by moving a VM. That way it would be completely automated and transparent for arrays that support the VAAI extensions and zero-block reclaim.
Hello DSeaman,
The simplest way would be to use the dd command. For example:
dd if=/dev/zero of=xxx.zero bs=X count=Y ; rm xxx.zero
Note: X has to be replaced with a block size (1M, 2M, 4M or 8M) and Y with the number of blocks to write, so the file size is X times Y. For example, dd if=/dev/zero of=xxx.zero bs=1M count=254 will create a 254 MB file of zeros.
Note: Don't forget the rm command, otherwise the space stays allocated.
A more radical way would be to run the command without the count parameter. That would create a file as big as the available free space. Don't do that on a system with high I/O, and make sure you have a valid backup in case something goes wrong.
Again, don't forget the rm command. Write both commands on one line, as I did above.
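Here is a scaled-down sketch of that fill-and-delete trick. The temporary directory, file name, and small sizes are illustrative only; on ESXi you would write into /vmfs/volumes/&lt;datastore&gt; and pick bs/count large enough to consume most of the free space:

```shell
# Demonstration at small scale: create a file of zeros, verify every byte
# is zero, then delete it so the blocks are freed again.
# (mktemp -d stands in for the datastore path here.)
DIR=$(mktemp -d)
dd if=/dev/zero of="$DIR/xxx.zero" bs=1M count=4 2>/dev/null

# a zero-filled file has no bytes left after 'tr -d' strips the NULs
NONZERO=$(tr -d '\0' < "$DIR/xxx.zero" | wc -c)
echo "non-zero bytes: $NONZERO"

# the rm is what actually releases the space back to the filesystem
rm "$DIR/xxx.zero"
rmdir "$DIR"
```

The same pattern, sized up and pointed at the datastore, is what zeroes the free VMFS blocks so the array's zero-detection can deallocate them.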
Regards.
How would one run that with ESXi?
You will need to use the BusyBox console (Tech Support Mode):
~ # dd --help
BusyBox v1.9.1-VMware-visor-klnext-2965 (2010-04-19 12:53:48 PDT) multi-call binary

Usage: dd [if=FILE] [of=FILE] [ibs=N] [obs=N] [bs=N] [count=N] [skip=N] [seek=N] [conv=notrunc|noerror|sync]

Copy a file with converting and formatting

Options:
        if=FILE         Read from FILE instead of stdin
        of=FILE         Write to FILE instead of stdout
        bs=N            Read and write N bytes at a time
        ibs=N           Read N bytes at a time
        obs=N           Write N bytes at a time
        count=N         Copy only N input blocks
        skip=N          Skip N input blocks
        seek=N          Skip N output blocks
        conv=notrunc    Don't truncate output file
        conv=noerror    Continue after read errors
        conv=sync       Pad blocks with zeros

Numbers may be suffixed by c (x1), w (x2), b (x512), kD (x1000), k (x1024), MD (x1000000), M (x1048576), GD (x1000000000) or G (x1073741824)
=========================================================================
William Lam
VMware vExpert 2009,2010
VMware scripts and resources at:
Getting Started with the vMA (tips/tricks)
Getting Started with the vSphere SDK for Perl
VMware Code Central - Scripts/Sample code for Developers and Administrators
I just read the latest DataCore advice on this:
To create and zero out in one sweep, type the following at the command line:

vmkfstools -c <size> -d eagerzeroedthick /vmfs/volumes/<mydir>/<myDisk>.vmdk

To zero out an existing disk, type the following at the command line:

vmkfstools -w /vmfs/volumes/<mydir>/<myDisk>.vmdk
Right, but I'm not looking to zero out a VMDK. I can do that with sdelete within the guest. I want to zeroize all unallocated VMFS blocks in the datastore, i.e. the ones not used by any VMDK.
By creating the VMDK described above (sized to the remaining free space of your datastore), zeroing it out, and then deleting it, you have effectively zeroed out the free blocks of the VMFS.
--Matt
VCP, vExpert, Unix Geek, Storage Nerd
I will assume that it's far safer to create a VMDK and zero that disk to free up storage in VMFS than to run the dd command directly against the VMFS storage, where thin-provisioned VMs might happen to write while dd is executing on the same sectors?
dd wouldn't write into other VMDKs, because the zero file it creates only occupies free blocks.
Regards