VMware Cloud Community
VirtualMicky
Contributor

Space reclamation vSphere 6.5

Hi

My environment is vCenter 6.5 with ESXi 6.5.

I have a virtual machine with a 1 TB thin-provisioned virtual disk. I created a VMFS6 datastore with space reclamation set to Low. I migrated the virtual machine to this datastore; internally the disk only occupies about 200 GB, but the space is not being reclaimed.

How long should I wait for the space to be freed? Is there a manual procedure to force the reclamation?

Thanks to all.

6 Replies
vembutech1
Hot Shot

In 6.5 with a VMFS6 datastore, reclamation happens automatically by default. Check that the reclamation priority is set to low.

Run the command

esxcli storage vmfs reclaim config get -l datastorename

and verify that the priority is set to low; if it is not, use the command below to set it to low:

esxcli storage vmfs reclaim config set -l datastorename -p low

To reclaim manually (as on 6.0 and earlier versions), run the following once and check whether the space was reclaimed:

esxcli storage vmfs unmap -l DatastoreName
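
For reference, here is the whole sequence as I would run it from an SSH session on the ESXi host. The datastore name is just a placeholder, and the -n block count on the unmap command is optional:

# check the current automatic reclamation setting on the VMFS6 datastore
esxcli storage vmfs reclaim config get -l MyDatastore

# set the reclamation priority to low if it is not already
esxcli storage vmfs reclaim config set -l MyDatastore -p low

# run a manual unmap pass; -n is the number of VMFS blocks reclaimed per iteration
esxcli storage vmfs unmap -l MyDatastore -n 200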

VirtualMicky
Contributor

Thanks.

I have checked the priority; I selected Low in the datastore properties in the web GUI.
Any other solution, please?

vembutech1
Hot Shot

What OS is running on the VM? Please provide more details about the OS.

VirtualMicky
Contributor

CentOS 7

I filled the disk up to 900 GB and then removed data down to 200 GB, but the datastore is not freeing the space.

vembutech1
Hot Shot

There might be an issue with releasing free space from the Linux file system. Zero out the free space (sdelete on Windows, or an equivalent method on Linux) and then try a Storage vMotion once.

https://blah.cloud/infrastructure/zero-free-space-using-sdelete-shrink-thin-provisioned-vmdk/
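
Since the guest is CentOS 7 rather than Windows, the sdelete step translates to zeroing the free space from inside the guest and then doing a Storage vMotion with the destination disk format set to Thin Provision, so the zeroed blocks get dropped. A rough sketch of the in-guest part (the /zerofill path is just a placeholder):

# inside the CentOS 7 guest: fill the free space with zeros, flush, then delete the fill file
# dd will stop with a "No space left on device" error once the disk is full; that is expected
dd if=/dev/zero of=/zerofill bs=1M
sync
rm -f /zerofill

Alternatively, if the virtual disk is thin and presented so the guest can issue UNMAP, running fstrim inside the guest (for example fstrim -av) should hand the freed blocks back without the zero-and-migrate step.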

Eric_Allione
Enthusiast

I have found with thin provisioning that if the guest OS has a virtual disk that goes up to 1 TB but (according to the guest) is only using 100 GB of it, vSphere can still show the disk using the full 1 TB, and even a little more than that for overhead.

What this shows is how a datastore can be oversubscribed far beyond its own capacity. In that scenario everything would still work even if the datastore's physical capacity were only 500 GB, because the guest OS is only using 100 GB even though it thinks it can store up to 1 TB. It's fine for the guest OS to believe that, even though in reality it cannot.

As a result, I have seen many examples of working datastores that report something like 30 TB provisioned against 18 TB of capacity; the provisioned figure simply adds up the maximum size of every virtual disk. Maybe that's what's going on here?
