I have a vSphere 5 infrastructure based on ESXi 5.0 hosts.
Hosts are connected to some VMFS datastores and 2 NFS datastores.
While migrating powered-off VMs (through the "Migrate" option) to one of the 2 NFS datastores, the vDisks on the target datastore end up in the "Thick Provision Eager Zeroed" format.
I double checked that while migrating I did specify "Thin" as the target disk format.
I need to save as much space as possible on the target NFS datastore.
The format of other vdisks on the same NFS datastore is "Thin".
What can I do to force the vDisk format to be "Thin" when migrating VMs to the target datastore?
How can I troubleshoot this problem?
Thank you for the pointer.
The problem is that I do specify the Thin provisioned virtual disk format, yet the result is a Thick disk.
So there is something wrong, but I can't find it...
How can I troubleshoot the problem?
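One way to check what is really on the datastore, independently of what the vSphere Client reports, is to compare each flat VMDK's apparent size with the blocks it actually allocates: a thin disk allocates far less than its nominal size. A minimal sketch of the idea, using a sparse file as a stand-in (on a real host you would point `ls`/`du` at the flat .vmdk under the NFS datastore's mount path instead of `/tmp/demo-flat.vmdk`, which is just a placeholder here):

```shell
# Create a 100 MB sparse file to stand in for a thin-provisioned flat vmdk
dd if=/dev/zero of=/tmp/demo-flat.vmdk bs=1M seek=100 count=0 2>/dev/null

# Apparent (provisioned) size: ~100 MB
ls -lh /tmp/demo-flat.vmdk

# Blocks actually allocated: near zero for a thin/sparse file,
# roughly equal to the apparent size for an eager-zeroed thick disk
du -h /tmp/demo-flat.vmdk
```

If `du` reports close to the full provisioned size for a disk the client claims is thin, the storage side is materializing the zeroes.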
Thin/thick provisioning depends on the NFS storage type, its configuration, VAAI support, the vSphere version, and so on.
from KB 1005418:
With NFS datastores, the provisioned disk format will be thin provisioned by default which cannot be changed. With vSphere 5.0, you can specify the provisioned format. For example, you can specify thick provisioning if the storage array or filer supports it via VAAI.
Try to check the storage setup first.
I do want to have Thin disks.
Why do I see Thick disks?
Even stranger, if I look at the files inside the NFS datastore they look like something between thin and thick, though I am not sure...
There is something wrong, but I need to troubleshoot what it is...
The storage is an HP StoreEasy server.
I made a check and the data look to be thick.
On the same storage I see VMs that look thin and actually are thin.
Even if I create a new VM and specify the format as thin, the disks are created as thick.
It looks like there is some setting forcing the creation of thick disks, maybe at the datastore level, but I can't figure out where it is...
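To narrow down which disks the filer is actually materializing, a small loop over the flat VMDKs that compares allocated blocks to apparent size can classify each one. This is only a sketch under assumptions: the datastore path and the 90% threshold are mine, not from the thread, and it relies on the NFS server reporting real block allocation through `stat`/`du` (which a filer may or may not do):

```shell
# Classify each flat vmdk as THIN or THICK by comparing allocated
# bytes (st_blocks * 512) against apparent size (st_size).
# DS is a hypothetical datastore mount path - adjust to your setup.
DS="${DS:-/tmp/demo-ds}"

# Demo setup: one sparse ("thin") and one fully written ("thick") file
mkdir -p "$DS"
dd if=/dev/zero of="$DS/thinvm-flat.vmdk" bs=1M seek=10 count=0 2>/dev/null
dd if=/dev/zero of="$DS/thickvm-flat.vmdk" bs=1M count=10 2>/dev/null

for f in "$DS"/*-flat.vmdk; do
    size=$(stat -c %s "$f")            # apparent (provisioned) size
    alloc=$(( $(stat -c %b "$f") * 512 ))  # bytes actually allocated
    # Call it thin if less than 90% of the provisioned size is allocated
    if [ "$alloc" -lt $(( size * 9 / 10 )) ]; then
        echo "$f THIN"
    else
        echo "$f THICK"
    fi
done
```

If every disk comes out THICK here, including ones created as thin, the zero-filling is happening on the storage side rather than in vSphere.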