Fdespres
Contributor

Can't change storage policy from thin to thick provisioning on vSAN 6.7

Hi,

I use a vSAN storage policy configured with thin provisioning on all my VMs, and I have two problems.

First, a great number of my VMs don't seem to apply thin provisioning. When I check the datastore, provisioned and used space are equal. Yet when I look at the disks on my VMs, thin provisioning is applied.

Second, I created a storage policy with RAID 5 and thick provisioning. If I try to apply this policy to a VM, I get a non-compliance result. The error is in the vSAN object space reservation: Expected value 100, Current value 0.

I don't understand how to apply my policy!

Thanks for your help.

4 Replies
Fdespres
Contributor

My status isn't out of date, and I can't reapply the storage policy. I have only one datastore, which is vSAN, and all my storage policies are compliant with it.
TheBobkin
VMware Employee

@Fdespres, what build version of ESXi is in use here? (If you have multiple clusters on different builds, tell us the builds of the problematic ones.)
Are you sure these VMs aren't thick-provisioned in the vmdk disk-primitive settings (which are no longer selectable in 6.5 and later)? Common causes and checks relating to this are documented here:
https://kb.vmware.com/s/article/66758
https://kb.vmware.com/s/article/2145798


"provisionned and used space are equals"
This alone does not indicate that they are thick (by any means). If a VM has filled its vmdk, it will look like this even with a thin SP. Also, vSAN doesn't Trim/Unmap from the Guest OS unless this is configured and run in the Guest OS, so if the VM ever filled its vmdk (and then freed up some space without a Trim/Unmap being run), it will appear this way as well.
Can you share/PM the output of esxcli vsan debug object list --vm-name 'VMNameHere' for one of these VMs?
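For reference, a rough sketch of the commands involved (the VM name is a placeholder, and the guest-side trim only helps once TRIM/UNMAP has been enabled on the vSAN cluster):

```shell
# On the ESXi host: list the vSAN objects backing a VM, including the
# storage policy each object actually carries (VM name is a placeholder).
esxcli vsan debug object list --vm-name 'VMNameHere'

# Inside a Linux guest, after freeing space: discard unused blocks so
# vSAN can reclaim them. This is only effective if TRIM/UNMAP is
# enabled on the vSAN cluster; fstrim ships with util-linux.
sudo fstrim -av
```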


"Second, I created a storage policy with RAID5 and thick provisionning. If I try to apply this policy on a VM, I have got a non compliance answer."
Can you create a new VM and try applying this SP to it? I ask because with issues like these it is crucial to isolate the scope of the problem. For example: can you apply new SPs compatible with the vSAN datastore at all? If not, which ones, and in all clusters or just one? If you can create new SPs, can they be applied successfully to new Objects/VMs as opposed to existing ones? If only to new ones, then you need to find out what is preventing this for the existing data, or find an alternative solution.
