mkramerbs
Enthusiast

pRDM to VVOL thin provisioning madness on 7.0 U3e

Migrating a pRDM to a vVol causes the VMDK to occupy 100% of the original LUN size, even if you opt to thin provision during the cold Storage vMotion. Why is the svMotion ignoring the thin request? Even stranger, the VMDK shows as thin provisioned even though it is not, and on top of that the size of the VMDK is reported incorrectly to vCenter and on the host, even though the data in the guest is still intact.

This is on a 7.0 U3e cluster connected to a Nimble iSCSI array.

  1. Create a 10 GB LUN on the Nimble.
  2. Add it as a pRDM to a Windows Server VM.
  3. Boot the VM, initialize the disk, quick format NTFS, mount a drive letter.
  4. Copy 0.08 GB of data.
  5. Shut down the VM.
  6. The array shows LUN utilization at 0.08 GB.
  7. Initiate a Storage vMotion to a vVol datastore (thin provisioned).
  8. After migration, the vVol shows usage for the migrated disk as 10 GB, not 0.08 GB as expected.

Both the array and ESXi show the disk as thin provisioned. Why is this happening? Also strange: if you browse the vVol datastore in the GUI, the VMDK disk size reports 0 KB.
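
A quick way to sanity-check what ESXi itself thinks of the disk, independent of the GUI, is to look at the descriptor and file sizes from the ESXi shell. Rough sketch only - the datastore and file names are placeholders, and the descriptor on a vVol container may not carry the same entries as on VMFS:

cd /vmfs/volumes/<vvol-datastore>/<vm-name>/
cat <disk-name>.vmdk     # descriptor; on VMFS a thin disk carries ddb.thinProvisioned = "1"
ls -lh *.vmdk            # provisioned size as the host presents it
du -h *.vmdk             # space the host actually sees consumed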

 

Migrating the physical RDM to vVol, selecting 'thin provision' as the disk format during the migration.
The disk migrated to vVol successfully, but even though it is thin provisioned, the Nimble vVol volume's used space shows as fully utilized.

Observation -
Windows OS disk - 0.08 GB / 10 GB
VMDK when browsing the vVol datastore in vSphere - 0 KB
vVol volume on the Nimble array - 10 GB / 10 GB
Both the vVol volume and vSphere show the VMDK as thin provisioned.

Space reporting after running TRIM / defrag in the Windows OS:

Windows OS disk - 0.08 GB / 10 GB
VMDK when browsing the vVol datastore in vSphere - 0 KB
vVol volume on the Nimble array - 2 GB / 10 GB
Both the vVol volume and vSphere show the VMDK as thin provisioned.

Still wrong, but slightly better.
Storage vMotioning the disk from the vVol back to VMFS, and then back to vVol again, resolves the issue.
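
For what it's worth, while the disk is parked on VMFS during that intermediate hop, vmkfstools can also punch out zeroed blocks from a thin disk while the VM is powered off. I haven't verified whether it changes anything for this case, and the path below is only a placeholder:

vmkfstools -K /vmfs/volumes/<vmfs-datastore>/<vm-name>/<disk-name>.vmdk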

I ran a defrag / TRIM in the guest OS and it did lower the volume usage on the Nimble array.
It's still not correct, but it does suggest that the conversion from physical RDM to vVol VMDK somehow wrote zeros across the entire drive.
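
If anyone wants to repeat the in-guest reclaim step, the usual way from PowerShell inside the guest is something like this (the drive letter is only an example):

Optimize-Volume -DriveLetter E -ReTrim -Verbose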

continuum
Immortal

> It's still not correct, but it does suggest that the conversion from physical RDM to vVol VMDK somehow wrote zeros across the entire drive.

Not sure if it works for vVol VMDKs - try:
vmkfstools -p 0 name-flat.vmdk > mapping.txt

That will at least show whether the vmdk is assembled using real zeroed fragments or fragments linked to /dev/zero.
Ulli


________________________________________________
Do you need support with a VMFS recovery problem? - send a message via Skype "sanbarrow"
I do not support Workstation 16 at this time ...
