I have been reviewing a number of guides and trying to understand the expected result of creating a lazy-zeroed thick VMDK file on an NFS datastore with VAAI enabled. From my testing, it appears that a lazy-zeroed thick disk does consume space within the FlexVol on a NetApp array. Is this the expected behaviour now that the VAAI primitives are supported on NFS?
The age-old question of thin on thin, thick on thin, or thin on thick seems to be irrelevant with this configuration. If you want thin provisioning on VAAI-enabled NFS datastores, you have to do thin on thin. Am I missing something here?
Here's how I summarized thin/thick disk provisioning:
THIN:
->Will ONLY occupy space when it is used
~Useful if you want to save space
->But performance is slower on first write (blocks are zeroed on demand)
THICK:
->Occupies its full space upon creation
-Lazy-zeroed thick: disk blocks are zeroed out ONLY when first written
-Eager-zeroed thick: disk blocks are zeroed out on creation
~Just takes longer to create the VM
->>Eager-zeroed thick is required for Fault Tolerance (FT) in an HA setup
CONVERSIONS:
->A THIN disk can be converted into a THICK disk
->Thick to thin is not possible in place (you have to clone or Storage vMotion the disk)
-Thin allocates disk blocks only when needed; lazy-zeroed thick reserves its full space up front but zeroes blocks on first write
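For reference, the create and convert operations above map onto `vmkfstools` commands from an ESXi shell. The paths below are hypothetical placeholders, and note that on NFS datastores without VAAI, `vmkfstools` can only create thin disks; the thick types require the VAAI reserve-space primitive:

```shell
# Create a 10 GB disk of each provisioning type:
vmkfstools -c 10G -d thin /vmfs/volumes/nfs_ds/vm1/thin.vmdk
vmkfstools -c 10G -d zeroedthick /vmfs/volumes/nfs_ds/vm1/lazy.vmdk
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/nfs_ds/vm1/eager.vmdk

# Thin -> thick: inflate in place (the result is eager-zeroed thick):
vmkfstools -j /vmfs/volumes/nfs_ds/vm1/thin.vmdk

# Thick -> thin: no in-place conversion; clone to a new thin disk instead:
vmkfstools -i /vmfs/volumes/nfs_ds/vm1/lazy.vmdk -d thin /vmfs/volumes/nfs_ds/vm1/thin_copy.vmdk
```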
BUT, inside the VM the virtual hard disk always appears at its full provisioned size.
**The guest OS sees the full hard drive, but with thin provisioning VMware only allocates disk blocks on the datastore as they are actually written.
Oh yeah, think of zeroing as somewhat like formatting the disk, much like when you install a new OS or partition a newly added disk - it's what makes the blocks safe for the guest to use.
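The allocation-versus-zeroing distinction above can be sketched with a toy model. This is pure illustration, not any VMware API; the `VirtualDisk` class and its `write` method are made-up names:

```python
# Toy model of the three VMDK provisioning types (illustrative only,
# not a VMware API). Sizes are in abstract "blocks".

class VirtualDisk:
    def __init__(self, size_blocks, mode):
        assert mode in ("thin", "lazy-thick", "eager-thick")
        self.size_blocks = size_blocks  # what the guest OS always sees
        self.mode = mode
        # Both thick types reserve their full size on the datastore at creation.
        self.allocated = size_blocks if mode != "thin" else 0
        # Only eager-zeroed thick writes zeros to every block up front.
        self.zeroed = set(range(size_blocks)) if mode == "eager-thick" else set()

    def write(self, block):
        # Thin grows its datastore footprint on the first write to a block.
        if self.mode == "thin" and block not in self.zeroed:
            self.allocated += 1
        # Thin and lazy-zeroed thick both zero a block on first write.
        self.zeroed.add(block)

disks = {m: VirtualDisk(100, m) for m in ("thin", "lazy-thick", "eager-thick")}
for d in disks.values():
    d.write(0)  # guest writes one block to each disk

for name, d in disks.items():
    print(f"{name}: guest sees {d.size_blocks}, "
          f"datastore holds {d.allocated}, zeroed {len(d.zeroed)}")
# thin:        guest sees 100, datastore holds 1,   zeroed 1
# lazy-thick:  guest sees 100, datastore holds 100, zeroed 1
# eager-thick: guest sees 100, datastore holds 100, zeroed 100
```

The guest always "sees" 100 blocks; only the datastore-side accounting differs between the three types.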
Oh yeah, to make things even simpler:
PERFORMANCE, highest first: Eager-Thick -> Lazy-Thick -> Thin
SPACE SAVINGS, highest first: Thin -> Thick