Migrating a pRDM to a vVol causes the VMDK to occupy 100% of the original LUN size, even when thin provisioning is selected during the Storage vMotion of a cold migration. Why is svmotion ignoring the thin request? Stranger still, the VMDK shows as thin provisioned even though it is not, and on top of that the size of the VMDK is reported incorrectly both to vCenter and on the host, even though in-guest the data is still intact.
This is on a 7.0 U3e cluster connected to a Nimble iSCSI array.
Both the array and ESXi show the disk as thin provisioned. Why is this happening? Also strange: if you browse the vVol datastore in the GUI, the VMDK disk size reports 0 KB.
Steps: migrated the physical RDM to vVol with "Configure disk as thin provision" selected.
The disk migrated to vVol successfully, but even though it is thin provisioned, the Nimble vVol volume's used space shows fully utilized.
Observations:
Windows OS Disk - 0.08GB / 10GB
VMDK on browsing vVol on vSphere - 0KB
vVol volume on Nimble Array - 10GB/10GB
vVol volume and vSphere show the VMDK as thin provisioned.
Space reports the following after running TRIM/defrag in the Windows OS:
Windows OS Disk - 0.08GB / 10GB
VMDK on browsing vVol on vSphere - 0KB
vVol volume on Nimble Array - 2GB/10GB
vVol volume and vSphere show the VMDK as thin provisioned.
Still wrong but slightly better.
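The partial reclamation after TRIM is consistent with the array deallocating the ranges the guest reports as free. As a loose analogy only (plain files on a Linux filesystem, not vVols or the Nimble array), hole-punching shows how allocated-but-zeroed space can be released without changing the file's apparent size:

```shell
# Fully-allocated 10 MiB file: stand-in for the zero-filled vVol VMDK
dd if=/dev/zero of=vol.img bs=1M count=10 status=none

# Allocated blocks before reclamation (512-byte units)
stat -c 'allocated blocks before: %b' vol.img

# "TRIM" analogue: punch a hole over a range the guest no longer uses.
# Allocation drops, but the apparent size stays 10 MiB.
fallocate --punch-hole --offset 0 --length 8M vol.img
stat -c 'allocated blocks after: %b' vol.img
```

The file still reports the same size afterwards, which mirrors what the guest and vCenter see: capacity unchanged, backing allocation reduced.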
Storage vMotioning the vVol VMDK back to VMFS and then back to vVol again resolves the issue.
I ran a defrag/TRIM in the guest OS and it did lower the volume usage on the Nimble array.
Although it's still incorrect, it shows that the conversion from physical RDM to a vVol VMDK somehow wrote zeros, filling the drive.
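If the RDM-to-vVol copy is done block by block without detecting zero runs, every sector, zeros included, gets written to the target, which would produce exactly this full allocation. A minimal sketch of the difference, using plain files on a Linux filesystem purely for illustration (this is not the ESXi data mover):

```shell
# Source: a 10 MiB file that is almost entirely unallocated (sparse),
# standing in for a mostly-empty source LUN
truncate -s 10M source.img
printf 'some data' | dd of=source.img conv=notrunc status=none

# Naive copy: writes every byte, zeros included -> fully allocated
cp --sparse=never source.img full_copy.img

# Zero-detecting copy: skips zero runs -> stays thin
cp --sparse=always source.img thin_copy.img

# Same apparent size, very different allocation on disk
du --block-size=1K source.img full_copy.img thin_copy.img
```

Both copies are byte-identical to the source; only the one that checked for zero runs stays thin, which matches the observed pRDM-to-vVol behavior.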
> Although it's still incorrect, it shows that the conversion from physical RDM to a vVol VMDK somehow wrote zeros, filling the drive.
Not sure whether it works for vVol VMDKs, but try
vmkfstools -p 0 name-flat.vmdk > mapping.txt
That will at least show whether the VMDK is assembled using real zeroed fragments or fragments linked to /dev/zero.
Ulli