VMware Cloud Community
BillClarkCNB
Enthusiast

Thin-provisioned disks reporting different sizes in Windows and VMware

Have a Windows 2008 R2 server running with multiple virtual disks.  Two of these disks appear to be heavily over-provisioned in Windows and we'd like to reduce them and reclaim the space, but VMware is reporting different sizes than Windows.  Here's what I have:

Disk1:

As reported by vSphere, capacity 500GB, thin-provisioned

As reported by Windows, 500GB (406GB free space)

Browsing the datastore through vSphere, virtual disk size of 470GB

Disk2:

As reported by vSphere, capacity 1.5TB, thin-provisioned

As reported by Windows, 1TB (858GB free space) The "missing" 500GB is not applied to the Windows partition for some reason.

Browsing the datastore through vSphere, virtual disk size of 296GB

There are no current snapshots of this server (none shown in vSphere or appearing in the datastore), so I'm very confused as to where the additional space shown on the datastore is coming from.  More importantly, how can I reclaim this unused space?  Is it possible to do a thin-to-thin copy using vmkfstools?
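For reference, the thin-to-thin copy I have in mind would look something like this from the ESXi shell (datastore and file names below are examples, not my actual layout):

```shell
# Clone the source VMDK to a new thin-provisioned copy; unallocated
# and zeroed blocks are not copied, so the clone comes out compacted.
# The VM must be powered off, and the VM would then need to be
# re-pointed at the new disk once the copy is verified.
vmkfstools -i /vmfs/volumes/datastore1/myvm/disk1.vmdk \
           -d thin /vmfs/volumes/datastore1/myvm/disk1-compact.vmdk
```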

7 Replies
BillClarkCNB
Enthusiast

OK, so after thinking this through a bit, I'm assuming the 470GB shown as used space on the datastore for Disk1 is because at some point that thin-provisioned disk actually used that much space, before files were moved or deleted in Windows.  That would explain the discrepancy for Disk2 as well.  I forgot that thin-provisioned disks grow as data is written, but they don't shrink automatically when that space is later freed.

So, to the big question, how can I reclaim that space?  Thanks!

compwizpro
Enthusiast

Your suspicion is probably correct about Windows consuming that space at some point.  Windows Server 2008 does not have the ability to natively reclaim space from the VMDK when you delete data from the disk.  Once data is written to a thin disk, the allocated space remains even when the data is deleted within the guest.  Additionally, NTFS is not very thin-friendly: even after data is deleted in the guest, it may still write to free or previously unwritten blocks, not knowing the underlying VMDK is thin.  New writes therefore keep bloating the VMDK, potentially inflating it close to its maximum size.
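A toy model (not VMware code; block counts are illustrative) of why a thin VMDK only ever grows under in-guest deletes:

```python
# Toy model: a thin-provisioned VMDK allocates backing space on first
# write to a block, but in-guest deletes never release that space,
# because the hypervisor is not told about filesystem-level deletes.

class ThinDisk:
    """Simplified backing file for a thin-provisioned virtual disk."""

    def __init__(self):
        self.allocated = set()          # blocks ever written by the guest

    def write(self, blocks):
        self.allocated |= set(blocks)   # first write allocates real space

    def guest_delete(self, blocks):
        pass                            # NTFS delete only touches metadata;
                                        # the backing file is unchanged

    @property
    def size(self):
        return len(self.allocated)

disk = ThinDisk()
disk.write(range(0, 100))        # guest writes 100 blocks -> VMDK grows
disk.guest_delete(range(0, 100)) # guest deletes the files -> no shrink
disk.write(range(100, 150))      # NTFS picks fresh blocks -> more growth
print(disk.size)                 # 150, though the guest sees only 50 in use
```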

One option to reclaim this space is to use a free tool such as SDelete to write zeros to the "free" space.  This will likely inflate the thin VMDK first, but you can then use either VMware Converter or vmkfstools to shrink it.  A guide for that can be found here: https://vswitchzero.com/2018/02/19/using-sdelete-and-vmkfstools-to-reclaim-thin-vmdk-space/
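In outline, the steps from that guide look something like this (drive letter and paths are examples):

```shell
# 1) Inside the guest: zero the free space on the volume.
#    This temporarily inflates the thin disk up to its full size.
sdelete.exe -z d:

# 2) Power the VM off, then from the ESXi shell: punch out the
#    zeroed blocks so the thin VMDK deallocates them.
vmkfstools -K /vmfs/volumes/datastore1/myvm/disk2.vmdk
```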

The best option is to upgrade the OS to Server 2012 or later and the VM to hardware version 11 or later, which allows automatic in-guest unmap: deleting data in the guest then shrinks the VMDK automatically.

Hope that helps.

BillClarkCNB
Enthusiast

@compwizpro - So I went through the process you linked to (the zeroing and hole punching) in a test environment, and everything looked good.  I then applied the same steps to Disk1, and the results aren't what I expected.  Running the du command beforehand, that VMDK was showing 470GB.  After zeroing out the free space in Windows and then running the hole-punch process, I expected to free up about 400GB on this particular drive.  When I re-ran du, it showed the VMDK at 360GB; I expected that number to be close to 100GB, matching what is actually used on the OS disk.  There are no snapshots on this server, and looking at the files either through the datastore browser or the console, I don't see any extra disks or files that would explain this discrepancy.  Any ideas?

compwizpro
Enthusiast

In your test environment, was that a fresh, empty disk that you put test files on and then deleted?  One thing you might need to do after deleting files and before zeroing is running a defrag.  Files may be scattered throughout the disk, and when shrinking the VMDK it might only shrink down to the last file on the disk, which could sit towards the end even though the middle is empty.

Running the defrag should compact everything as close to the front of the disk as possible; then zero everything out and attempt the shrink again.  That should yield better results.  If it's a system drive, you might need to run an offline defrag, since there can be unmovable system files towards the end of the drive.
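On Server 2008 R2, that consolidation step would look something like this from an elevated prompt (D: is an example drive letter):

```shell
rem Analyze the volume first to see the fragmentation report...
defrag D: /A /V

rem ...then run the defragmentation with verbose output.
defrag D: /V
```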

Let me know if that helps.

BillClarkCNB
Enthusiast

I ran a CHKDSK against the drive in question, and during that time I also discovered that a long, long time ago Shadow Copies had been configured.  I was able to manually delete any remnants of that, and the CHKDSK finished with no errors.  So I re-ran the whole process of zeroing out the drive and then hole-punching, and nothing changed.  The du command STILL shows that drive holding onto 360GB, yet on the Windows side only 106GB is in use, and I can't find where the other 200+ GB is coming from.  As a last-ditch effort, I'm running a defrag against the drive and will try the process one more time; after that I'll have to cut my losses and move on.

compwizpro
Enthusiast

If you can, save the defrag report to see what the before and after look like, and compare that to your hole-punch results.

If you really want to shrink the drive, you can always resort to VMware Converter, but that is an offline operation.

ThompsG
Virtuoso

Hi BillClarkCNB,

Try running SDelete (a SysInternals utility) inside the guest:

sdelete -z <drive_letter>    (or sdelete64 if running a 64-bit OS)

This will write zeros over the free space blocks on the drive. After this you can do the “punch zero” from ESXi and the VMDK should shrink.

Kind regards.
