When converting thick to thin with Storage vMotion, the destination datastore you select must be different from the source; just changing the provisioning type in place will not work. Let us know if that is how you are doing it.
I am using a different destination datastore. It's only the disks attached to the paravirtual adapter that are failing.
What error do you get?
I don't get any errors, which is quite frustrating. It just doesn't work.
Have you already tried it via PowerCLI?
Use the Move-VM cmdlet with the -DiskStorageFormat parameter.
Example: Get-VM -Datastore <Datastorename> | Move-VM -Datastore <Datastorename> -DiskStorageFormat thin
Makes no difference.
PS C:\Users> get-vm -name relimageatl01 | move-vm -Datastore at1_relativity_03 -DiskStorageFormat Thin
Name PowerState Num CPUs MemoryGB
---- ---------- -------- --------
RELIMAGEATL01 PoweredOn 8 16.000
Description: "Migration from host at1esx21.kilpatrickstockton.ks, at1_relativity_02 completed"
Type:        Information
Date Time:   10/12/2018 8:55:12 AM
Target:      RELIMAGEATL01
As you can see, no errors; it just silently fails. As I've mentioned, this happens ONLY for disks attached to the paravirtual SCSI adapter. I can convert thick to thin and vice versa all day long for disks attached to the other SCSI controller types.
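In case it helps anyone reproduce this, here is a rough PowerCLI sketch (from memory, not a capture from my session; the VM name is from my environment, adjust as needed) that lists each disk along with the type of SCSI controller it hangs off, so you can confirm which disks sit on the ParaVirtual controller:

# Sketch: list each hard disk with its storage format and controller type.
$vm = Get-VM -Name relimageatl01
$controllers = Get-ScsiController -VM $vm

foreach ($disk in (Get-HardDisk -VM $vm)) {
    # Match the disk to its controller via the ControllerKey in the vSphere API view
    $ctrl = $controllers | Where-Object { $_.ExtensionData.Key -eq $disk.ExtensionData.ControllerKey }
    [pscustomobject]@{
        Disk           = $disk.Name
        StorageFormat  = $disk.StorageFormat
        ControllerType = $ctrl.Type          # e.g. ParaVirtual, VirtualLsiLogicSAS
    }
}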
Since it seems to be a bug, I have opened a case with VMware and will follow up here in case anyone encounters it.
I just wanted to follow up on this to post the results from my case with VMware.
When we looked at the individual VMDKs attached to the paravirtual adapters, they were already flagged as thin in the .vmdk descriptor files, despite showing as thick in the client display and in actual disk usage (du on the datastore showed each disk consuming the full provisioned amount). So we had to fall back to the old approach of using SDelete.
# The Disk Data Base
ddb.thinProvisioned = "1"
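As a rough illustration (this is a sketch, not output from my environment), you can compare what PowerCLI and the client report per disk against what the descriptor claims; in our case this still said Thick even though the descriptor already carried the ddb.thinProvisioned flag shown above:

# Sketch: per-disk storage format as seen by PowerCLI / the vSphere client.
Get-VM -Name relimageatl01 |
    Get-HardDisk |
    Select-Object Name, Filename, StorageFormat, CapacityGB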
After running SDelete to zero out the unused blocks in the guest Windows OS, we did another svMotion to thin, and that time it actually took: the disks showed as thin both in the client display and in actual disk usage. This has worked on two VMs so far, as sketched below. It certainly seems like some sort of bug, but at least we have a workaround. We use NetApp NFS storage and have the current NetApp plugin installed on these hosts.
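For reference, the workaround boils down to the two steps below. This is a sketch rather than the exact commands from our runbook; the drive letter is a placeholder, and it assumes Sysinternals SDelete is already on the guest:

# Step 1: inside the guest Windows OS, zero out unused blocks on the affected
# volume with SDelete (-z zeroes free space).
sdelete.exe -z D:

# Step 2: from PowerCLI, svMotion the VM to a different datastore as thin.
Get-VM -Name relimageatl01 |
    Move-VM -Datastore at1_relativity_03 -DiskStorageFormat Thin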