VMware Cloud Community
robertendicott2
Contributor

vSphere 6.5 VMFS 6 not reclaiming space after snapshot removal

After deleting snapshots or consolidating VMs, the datastore does not reclaim the freed space. vCenter version 6.5.0.14100, build 7801515; VMware ESXi 6.5.0, build 7388607; VMFS 6. Before removing snapshots and consolidating a VM I had 10.71 TB free on the datastore; after the removal I had 9 TB, so during the snapshot removal I lost over 1 TB of space. I have a VMware support ticket open as well, but I wanted to see if anyone else has experienced this and, if so, what the solution is. (I have a vSphere 6.0 VMFS 5 environment that does not have this issue.)
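For anyone wanting to check the same thing, the datastore's automatic space-reclamation setting can be read from the ESXi shell; the datastore name below is a placeholder for this example:

# Show the automatic reclamation (unmap) granularity and priority for a VMFS-6 datastore
esxcli storage vmfs reclaim config get --volume-label=MyDatastore

On VMFS 6 the priority should report as "low" by default; "none" means automatic reclamation is disabled for that datastore.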

22 Replies
daphnissov
Immortal

This may also be something you want to check with your storage vendor. There have been a couple of bugs floating around when it comes to automatic UNMAP on VMFS-6, and I think most have been fixed, but it's worth checking whether there is a correlation with your storage microcode.
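As a quick sanity check, you can also confirm that the LUN actually advertises the Delete (UNMAP) primitive to ESXi; the naa ID below is a placeholder:

# "Delete Status: supported" means the device accepts UNMAP from the host
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx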

RAJ_RAJ
Expert

What is your back-end storage?

Which model, and are you using FC / iSCSI / NAS?

RAJESH RADHAKRISHNAN VCA-DCV/WM/Cloud, VCP 5 - DCV/DT/CLOUD, VCP6-DCV, EMCISA, EMCSA, MCTS, MCPS, BCFA https://ae.linkedin.com/in/rajesh-radhakrishnan-76269335 Mark my post as "helpful" or "correct" if I've helped resolve or answered your query!
robertendicott2
Contributor

We are using a Nimble CS5000 for our storage, connected via iSCSI. I have opened a ticket with Nimble support, but they are pointing to a VMware issue.

robertendicott2
Contributor

I documented the space usage for a VM I just removed a snapshot from. Here is a before & after:

Datastore

          Size   Used     Avail    Use%
Before    30TB   10.7TB   19.3TB   36%
After     30TB   11.1TB   18.9TB   37%

Size of the VM directory (using du -h)

Before    13.2GB
After     423.8GB

      

[Before / After screenshots attached]

This VM's disk is thin provisioned, but after the snapshot removal it seems like the VM has expanded. It is a 400 GB disk of which only 7.45 GB is used, so most of the disk is free. This was a fresh 6.5 environment, but the VMs were imported as OVAs from a vSphere 6.0 environment and are still at virtual hardware version 11.
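As a side note (not specific to the root cause here), once a thin disk has inflated like this, one way to shrink it again is vmkfstools' punch-zero option, run with the VM powered off and after the free space inside the guest has been zeroed; the path below is a placeholder:

# Deallocate zeroed blocks from a thin-provisioned VMDK (VM must be powered off)
vmkfstools -K /vmfs/volumes/MyDatastore/MyVM/MyVM.vmdk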

ashishsingh1508
Enthusiast

What is the unmap granularity supported by this device? Currently, ESXi 6.5 doesn't support automatic unmap processing for devices whose unmap granularity is greater than the VMFS file block size, i.e. > 1 MB.

Ashish Singh VCP-6.5, VCP-NV 6, VCIX-6,VCIX-6.5, vCAP-DCV, vCAP-DCD
robertendicott2
Contributor

We are using the 1MB fileblocksize.

robertendicott2
Contributor

I am working with VMware Support on this issue as well. They told me to upgrade to ESXi 6.5 U2 and that it would resolve the issue. I did the upgrade and am still having the same issue. When I get a resolution I will post back.

subhashdhyani
Contributor

I am having the same issue.

Have you found a solution?

SupreetK
Commander

Subhash - Can you share the below details?

1) What is the ESXi version and build number?

2) Storage array make, model and firmware revision

3) Is auto-unmap not working for VMFS-6 or are you running manual reclamation for VMFS-5?

4) What is the unmap granularity set on the storage array?

Cheers,

Supreet

subhashdhyani
Contributor

Hi,

I have checked with both of the versions below:

VMware ESXi 6.5.0 build-7388607 and

VMware ESXi 6.5.0 Update 2 (Build 8935087)

Storage details: Dell SC5020 (Compellent)

Firmware: the current storage firmware is 7.2.31.3

Storage auto-unmap is not working for VMFS 6.

VMFS 6 with 1 MB block size and the space reclamation priority set to Low (the default); I have also tried it set to High.

SupreetK
Commander

Looks like the auto-unmap (and manual reclaim as well) is broken in 6.5 GA and Update-1. On the 6.5 U2 host, can you run the below commands for one datastore and share the output?

1) Reset SCSI stats for the LUN in question - <vsish -e set /storage/scsifw/devices/<LUN_ID>/resetVaaiStats 1>

2) Check if the delete stats are reset to zero - <vsish -e get /storage/scsifw/devices/<LUN_ID>/stats | grep -i delete>

3) Initiate a manual unmap (auto-unmap only runs while there is a powered-on VM on the datastore in question); a manual-unmap example is shown at the end of this post

4) After sometime, check the delete stats - <vsish -e get /storage/scsifw/devices/<LUN_ID>/stats | grep -i delete>

If there is an increment in the delete stats, rest assured the unmaps are being sent to the storage array. In that case, the storage vendor needs to be involved.

Preferably, perform the above steps on a test datastore first. If it works for a test datastore, it should be the same for all datastores coming from that storage array.
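For step 3, a minimal manual-unmap example (the datastore name is a placeholder; -n is the number of VMFS blocks reclaimed per iteration and defaults to 200):

# Manually reclaim free space on a VMFS datastore from the ESXi shell
esxcli storage vmfs unmap -l MyDatastore -n 200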

Cheers,

Supreet

sjesse
Leadership

I"m using a nimble cs300 in our VDI enviornment and umap seems to be working great, its nice not to have to run it once a month because of all the clones that get created. It does take some time, its not immediate.

subhashdhyani
Contributor

Hi,

Output

1.    

[root@localhost:~] vsish -e set /storage/scsifw/devices/naa.6000d310056a6600000000000000000b/resetVaaiStats 1

[root@localhost:~]

2.    

[root@localhost:~] vsish -e get /storage/scsifw/devices/naa.6000d310056a6600000000000000000b/stats | grep -i delete

      deleteSuccess:0

      deleteFailure:0

   total delete cmds:0

   total delete failures:0

   total blocks deleted:0

   total unaligned ats, clone, zero, delete ops:0

[root@localhost:~]

There is NO volume under the path below.

[root@localhost:~] vsish

/>cd /vmkModules/vmfs3/auto_unmap/volumes/

/vmkModules/vmfs3/auto_unmap/volumes/> ls
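(Side note: the empty listing above suggests automatic unmap is not attached to any VMFS-6 volume on this host. If the datastore's reclamation priority was ever set to "none", re-enabling it should, as far as I know, bring the volume back under that node; the datastore name below is a placeholder:)

# Re-enable automatic reclamation on the datastore (low is the 6.5 default priority)
esxcli storage vmfs reclaim config set --volume-label=MyDatastore --reclaim-priority=low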

SupreetK
Commander

Did you run the manual unmap? Or is there a powered-on VM on the datastore for the auto-unmap to run?

Cheers,

Supreet

subhashdhyani
Contributor

There is a powered-on VM.

SupreetK
Commander

Can you now run the command <vsish -e get /storage/scsifw/devices/naa.6000d310056a6600000000000000000b/stats | grep -i delete> and share the output?

Cheers,

Supreet

subhashdhyani
Contributor

Hi,

Output is:

[root@localhost:~] vsish -e get /storage/scsifw/devices/naa.6000d310056a6600000000000000000b/stats | grep -i delete

      deleteSuccess:0

      deleteFailure:0

   total delete cmds:0

   total delete failures:0

   total blocks deleted:0

   total unaligned ats, clone, zero, delete ops:1011

[root@localhost:~]

IRIX201110141
Champion

About Dell SC (Compellent).

The default block size on the storage side is 2 MB (up to 4 MB), which is more than 1 MB, so the automatic ESXi VMFS 6 unmap feature can't work. If you have a Dell SC with SSDs only, you can set up the array with a 512 KB block size, but even when using this block size I am not sure whether the feature works on the SC.

We run the unmap command through a script from time to time.
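For anyone who wants to do the same, here is a minimal sketch of that kind of script (the datastore names are placeholders; it assumes it runs from the ESXi shell, e.g. via cron):

#!/bin/sh
# Reclaim free space on each VMFS datastore in turn.
for DS in Datastore01 Datastore02; do
    echo "Unmapping ${DS} ..."
    esxcli storage vmfs unmap -l "${DS}"
done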

Regards,

Joerg

subhashdhyani
Contributor

Hi Joerg,

Thanks for the reply.

Yes, we have both 10K SAS and SSD disks.

And the default disk pool page size is 2 MB, which is not changeable.

