VMware Cloud Community
JDMils_Interact
Enthusiast

Converting virtual disks from Thick provisioned to Thin provisioned is not resulting in any disk space savings

I am trying to migrate a 24TB Windows 2012 virtual machine from old NetApp storage to new NMS storage. I used the approach of migrating the server's storage only, one disk at a time, changing the disk format from Thick Provision Eager Zeroed to Thin.

From what I can see, the disks are showing as Thin provisioned; however, the physical disk sizes on the NMS datastore seem to be the full disk size, not the "thinned" disk size.

This is the source server:

Disk1: 80GB
Disk2: 9TB
Disk3: 2.2TB
Disk4: 10TB
Disk5: 100GB
Disk6: 2TB

vCenter web GUI shows the server storage used is 22.8TB.

After migrating the disks to the NMS storage, I thought I might migrate Disk5 to a new datastore, in case vCenter shrinks the disk after the thinning. This is what I see on the storage:

[root@host06:/vmfs/volumes/4686642d-b436c247/DFS01 (1665afe7-45a7-46c1-93e9-xxxxxxxxxx)] ls -lisah
total 67365872
1651711928 4 drwxr-xr-x 2 root root 4.0K Dec 14 23:13 .
1651703872 4 drwxr-xr-x 9 root root 4.0K Dec 14 23:10 ..
1651711931 4 -rwxrwxr-x 1 root root 92 Dec 14 23:29 .lck-b91f736200000000
1651711932 6436 -rw------- 1 root root 6.3M Dec 14 23:13 DFS01 (1665afe7-45a7-46c1-93e9-xxxxxxxxxx)_2-ctk.vmdk
1651711929 67359420 -rw------- 1 root root 100.0G Dec 14 23:13 DFS01 (1665afe7-45a7-46c1-93e9-xxxxxxxxxx)_2-flat.vmdk
1651711930 4 -rw------- 1 root root 705 Dec 14 23:13 DFS01 (1665afe7-45a7-46c1-93e9-xxxxxxxxxx)_2.vmdk
[root@host06:/vmfs/volumes/4686642d-b436c247/DFS01 (1665afe7-45a7-46c1-93e9-xxxxxxxxxx)]

The virtual machine is in vCloud Director as well.

In the past, the only way I could do this was with VMware Converter, but when I tried that method on the above server the conversion never completed; it crashed with a strange VSS error, and VMware Support told me Converter is no longer a supported product, so they could not help.

The only way I can see to thin out the disks is to try this method: https://kb.vmware.com/s/article/2136514. However, I can't power this server off as it runs critical systems.

Is there any other way to fix this?

7 Replies
sjesse
Leadership

I can't open your KB article, but I think this one says the same thing:

https://kb.vmware.com/s/article/2004155

It requires you to shut the VM down; there really isn't another way I'm aware of at the moment.
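
From memory, that KB mostly boils down to punching out zeroed blocks with vmkfstools while the VM is powered off. A rough sketch (the datastore, folder and disk names are placeholders, and it only applies to disks on VMFS, not NFS):

# VM must be powered off; free space inside the guest should be zeroed out first (e.g. with sdelete)
cd "/vmfs/volumes/<datastore>/<vm-folder>"
vmkfstools -K "<disk-name>.vmdk"    # --punchzero: deallocates blocks that contain only zeroes

If the free space inside the guest was never zeroed, punchzero has nothing to reclaim, which is why the guest-side zeroing step matters.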

kastlr
Expert

Hi,

Based on your post, it looks like you're using block devices instead of NFS exports.

As you're migrating between two different arrays, the classic svMotion process would be used (no VAAI).

If your old array didn't support Trim & Unmap initiated from the Guest OS layer, it's possible that all blocks assigned to the VMDK still contain data.

svMotion then has to copy even unused data to the new array, simply because only the Guest OS knows which data is stale.

I assume that the new array will fully support VAAI Zero or Guest OS Trim & Unmap commands, so you have to inform it which data is still needed and which is stale.

Check out the following article which does contain a section on how to handle Windows Guest VMs.

Automated Space Reclamation

Ignore the fact that the article is about vSAN; the Microsoft section describes the tasks inside the VM, and they are completely independent of the storage array used.
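
As a rough example of what the Microsoft section describes, inside a Windows Server 2012 guest you can check whether delete notifications (Trim & Unmap) are enabled and then retrim the free space. The drive letter below is just an example, and space is only reclaimed if UNMAP is passed all the way down to the array:

# run in an elevated PowerShell session inside the guest
fsutil behavior query DisableDeleteNotify         # 0 means TRIM/UNMAP notifications are enabled
Optimize-Volume -DriveLetter D -ReTrim -Verbose   # asks NTFS to send UNMAP for all free space on D: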

Regards,

Ralf

 


Hope this helps a bit.
Greetings from Germany. (CEST)
a_p_
Leadership

Please don't mind me asking, but I don't see your issue. Can you please clarify?

The output

1651711929 67359420 -rw------- 1 root root 100.0G Dec 14 23:13 DFS01 (1665afe7-45a7-46c1-93e9-xxxxxxxxxx)_2-flat.vmdk

shows that the virtual disk is indeed thin provisioned, and consumes ~64GB on the datastore. The reason why it shows the 100GB provisioned size is that thin provisioning is a file system feature, i.e. not a property of the file itself.
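
If you want to double-check this, du on the datastore reports what a thin disk actually consumes rather than its provisioned size (reusing the redacted file name from your output):

du -h "DFS01 (1665afe7-45a7-46c1-93e9-xxxxxxxxxx)_2-flat.vmdk"

For this disk it should report roughly 64GB, matching the 67359420 1K-blocks shown in the second column of your ls output.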

André

JDMils_Interact
Enthusiast

Hi A_P_,

Sorry, bad example. That is one of the disks, and it indeed looks like it's working as a thin-provisioned disk, so let's look at another one of the disks:

1651781474 9463307940 -rw------- 1 root root 8.8T Dec 15 05:42 DFS01 (1665afe7-45a7-46c1-93e9-xxxxxxxxxx)_1-flat.vmdk

So the disk shows 9463307940 KB, or 8.8133 TB, and this is a thin-provisioned disk!

JDMils_Interact
Enthusiast

Looks like an updated version of the article I posted, and interestingly, it states that if I storage vMotion the virtual machine to a datastore with a different block size, the disks will shrink!

I can't afford to take the server offline, and it states the PunchZero command does not work with NFS, so I'll try the svMotion and see what happens!
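
For reference, this is roughly the PowerCLI equivalent of the storage-only migration I plan to run, with a thin target format (the vCenter and datastore names are placeholders, and unlike my per-disk migrations in the GUI this moves all of the VM's disks in one go):

Connect-VIServer -Server <vcenter-fqdn>
Get-VM -Name "DFS01" | Move-VM -Datastore (Get-Datastore "<target-datastore>") -DiskStorageFormat Thin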

Thanks.

kastlr
Expert

Hi,

To the best of my knowledge, VMDKs on NFS datastores will always be thin.

Can't you use the vSphere plugin provided by your NFS array vendor to reclaim space?


Hope this helps a bit.
Greetings from Germany. (CEST)
JDMils_Interact
Enthusiast

The space used is within the OS filesystem, so the underlying array has no knowledge of the "free" space available. I will check with the storage team to see if the array can do this.

However, we are going from NFS to NFS datastores and the arrays are set up for 4K blocks, so the only option I have is to add iSCSI VMkernels to the hosts and have the storage team add a new controller with iSCSI configured, then present a VMFS datastore which I should be able to format with an 8K block size. From there I should be able to reduce the disk footprint on the datastore by migrating to VMFS, and then I will have to re-migrate to the final resting place on NFS.
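
On the host side I expect the iSCSI plumbing to look something like this (the vmk number, portgroup, adapter name and IP addresses are placeholders until the storage team gives me the real details):

esxcli iscsi software set --enabled=true                     # enable the software iSCSI adapter
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-PG
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.10.21 --netmask=255.255.255.0 --type=static
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2  # bind the VMkernel port to the adapter
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.10.10.100:3260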

Let's see how this goes!
