vCenter 6.7.0 build-11727113
I initially deployed the virtual machines from a template with only the base disk, which is "Thin" type, and afterwards converted it to "Thick" type using the inflate option. On the backend I can see the disk has been converted and is Thick, but in the VM Summary tab it still shows as "Thin".
The issue is only with the base disk, for all virtual machines. (All other disks show the type as Thick, as these disks were created as Thick from the start.)
I checked the virtual machine's backend disk size and the disk utilization matches the thick type. I validated the vSAN storage policy the virtual machines run on: the OSR (Object Space Reservation) value is set to 100%, which means the disk type should be thick. I also checked the proportional capacity value from the RVC console, and for all the disks it is 100, which confirms the disk type is thick.
I need a way to update the disk type shown in the VM Summary tab from "Thin" to "Thick".
Notes: I am aware that in 6.5 and 6.7 any disk with OSR = 100 and proportional capacity = 100 will be treated as thick at the backend by applications, so the issue is a display/cosmetic problem. As of now, the fix will be rolled out in vSphere 7.0.
But as I understand it, that applies to the scenario where the vSAN space is over-utilized, which is not the case here.
Any help would be appreciated.
Moderator: Moved to vSphere Discussions
Hi
first of all, check whether the vmdk still has thin areas.
To do that run
vmkfstools -p 0 name-flat.vmdk > mapping.txt
next run
grep "NOMP" mapping.txt
If that has one or more results, the vmdk is still thin.
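Here is a minimal sketch of that check, using a simulated mapping.txt (the real file comes from the vmkfstools command above; the line format in the sample is illustrative only):

```shell
# The real mapping would come from (run on the ESXi host):
#   vmkfstools -p 0 name-flat.vmdk > mapping.txt
# Simulate a mapping with one unallocated (NOMP) block to show the check.
printf '[0: NOMP]\n[1048576: VMFS 104857600]\n' > mapping.txt

# Count thin (NOMP) blocks; 0 means no thin areas remain.
thin_blocks=$(grep -c "NOMP" mapping.txt)
echo "thin blocks: $thin_blocks"
```

A count greater than 0 means the vmdk still has thin areas.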
Next, wipe those areas with zeroes - you can ask your colleagues from VMFS development how to do that.
Just kidding - if the mapping.txt has one or more lines with NOMP, attach the txt file to your next reply.
Ulli
Unless I'm missing something, inflating a thin provisioned virtual disk doesn't make it a thick one. Inflating will allocate the provisioned disk space, but that's not the same as, e.g., doing a storage migration (with another target disk format).
André
Hi
if you want to be sure, use V2V or storage migration - they do it.
Andre - how do you define a thick provisioned vmdk ?
or what is an eager zeroed thick vmdk ?
In the end you have a large number of 1 MB blocks, possibly mixed with 512 MB blocks, and each of them is either
thin - means the block is allocated by a link to /dev/zero
lazy - means the block is allocated by a link to an offset on the vmfs-volume using the Z flag
eager - means the block is allocated by a link to an offset on the vmfs-volume.
Thin and lazy blocks turn into an eager block if the guest has written at least one bit to them.
Large blocks of 512 MB are also used, but they do not change anything for this matter.
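The three block states above can be tallied from a vmkfstools -p 0 dump. This is a sketch with a simulated mapping file, assuming (based on the flags described in this thread) that "NOMP" marks a thin block, a "Z" flag marks a lazy block, and any other mapped offset counts as eager - the exact line format varies:

```shell
# Simulated vmkfstools -p 0 output: one thin, one lazy, one eager block.
cat > mapping.txt <<'EOF'
[0: NOMP]
[1048576: VMFS 104857600 Z]
[2097152: VMFS 209715200]
EOF

# Classify every block into thin / lazy / eager.
summary=$(awk '
  /NOMP/ { thin++; next }    # no mapping at all -> thin
  / Z/   { lazy++; next }    # mapped with Z flag -> lazy zeroed
  /VMFS/ { eager++ }         # mapped to an offset -> eager
  END { printf "thin=%d lazy=%d eager=%d", thin+0, lazy+0, eager+0 }
' mapping.txt)
echo "$summary"
```

A vmdk is only a true eager zeroed disk when the thin and lazy counters are both zero.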
In the screenshot you see the first 3 large blocks of 512 MB of new blank vmdks.
First 2 examples are thin
Next 2 examples are lazy
Last 2 examples are eager
All 3 variants are completely blank at first.
Then I wrote to the first MB and showed the new allocation.
If you look carefully then you must conclude that only eager zeroed vmdks have a strict definition:
all blocks must use a reference to an offset without Z-flags.
As soon as you write to a thin vmdk it turns into a mixed thin/eager vmdk.
As soon as you write to a lazy vmdk it turns into a mixed lazy/eager vmdk
And if you write to every MB-block of a thin or lazy vmdk, it turns into an eager vmdk.
Now let's look at some special cases: assume a GPT-partitioned VMDK.
When the GPT copy at the end of the disk is located in the last MB-block, a user can write
to all locations: a thin or lazy vmdk can, after a while of use, turn into a true eager vmdk.
When the GPT copy only reaches into the third-last MB-block, user operations will typically not result in a true eager vmdk.
Same thing with gaps between partitions ...
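The GPT case boils down to inspecting the state of the last MB-block in the mapping, since that is where the backup GPT copy decides whether normal guest writes can ever allocate it. A sketch with a simulated mapping file (the real one would come from vmkfstools -p 0; the line format is illustrative):

```shell
# Simulated mapping whose last block was never written to (still NOMP/thin).
cat > mapping.txt <<'EOF'
[0: VMFS 104857600]
[1048576: NOMP]
EOF

# Check the allocation state of the final MB-block.
if tail -n 1 mapping.txt | grep -q "NOMP"; then
  last_state="thin"
else
  last_state="allocated"
fi
echo "last block: $last_state"
```

If the last block stays thin, the disk can never become a true eager vmdk through guest writes alone.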
I assume that applications that require eager vmdks use the strict definition - all blocks must be eager; it makes no sense otherwise.
So I think the question I asked - are there still any thin blocks? - is valid.
I guess that some applications keep their own extra information about a vmdk.
Like the vmsd-file that is used by secondary functions - and similar to the known problems with vmsd-files, I bet they have similar issues.
To see whether a snapshot chain is valid, you need the complete chain of parentFileNameHint entries to be consistent.
In a similar way you can tell whether an eager zeroed vmdk is really eager zeroed.
And inflating a thin or lazy vmdk can produce a true eager vmdk - it just depends on how you do it,
and whether you make sure you also inflate the end of the vmdk and the gaps between partitions.
Anyway - a good secondary function or application should be able to check whether it deals with a true eager or a mixed type.
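Such a check is straightforward to sketch: a true eager zeroed vmdk must have no thin (NOMP) and no lazy (Z-flagged) blocks at all. Simulated mapping file again; the flag conventions are the assumptions stated earlier in the thread:

```shell
# Simulated mapping of a fully inflated disk: every block mapped, no Z flags.
cat > mapping.txt <<'EOF'
[0: VMFS 104857600]
[1048576: VMFS 209715200]
EOF

# Strict eager check: reject the disk if any thin or lazy block remains.
if ! grep -q "NOMP" mapping.txt && ! grep -q " Z" mapping.txt; then
  state="true eager"
else
  state="mixed"
fi
echo "$state"
```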
[Screenshots: vmkfstools -p 0 block mappings of the thin, lazy, and eager examples - blank, then after writing to the first MB]
> inflating a thin provisioned virtual disk doesn't make it a thick one
Well, not in all cases - but when conditions are lucky, user interaction can turn a thin or lazy vmdk into a true eager one.
50 lines of explanation were lost while trying to change the image ...
In short : eager vmdks are the only type that does not appear in a mixed form.
As soon as you write to a thin or lazy vmdk it turns into a mixed thin/eager or mixed lazy/eager vmdk.
Eager vmdks are either true eager vmdks - all blocks reference an offset on a vmfs-volume or no eager vmdk at all.
So checking whether a newly converted eager vmdk still has thin blocks makes sense.
If it still has thin blocks, it is still a thin vmdk.
Grrr - I won't write it all again ...
