VMware Cloud Community
asdfas3e5rqwer
Contributor

cannot svMotion disk to thin if using paravirtual adapter

Hi,

I am trying to get rid of some thick disks in our environment, and I am consistently seeing that if a thick disk is attached to the paravirtual adapter, I cannot svMotion it to thin. The svMotion completes successfully, but the disk is still thick. Is this expected behavior or some sort of bug? Taking the VMs down and using vmkfstools isn't really an option.

We're on vCenter 6.5u2c and the hosts are either 6.5u1 or 6.5u2. Tools are generally current and virtualHw versions are 10 and higher.

Thanks

8 Replies
SupreetK
Commander

To convert thick to thin with Storage vMotion, the destination datastore has to be a different one than the source; just changing the provisioning type on the same datastore will not work. Let us know if this is how you are doing it.
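For example, something along these lines in PowerCLI should do it (untested sketch; the VM and datastore names below are placeholders):

$vm = Get-VM -Name 'MyVM'
$vm | Get-Datastore          # shows the current (source) datastore(s)
# The destination must be a different datastore for the conversion to take effect
$vm | Move-VM -Datastore (Get-Datastore -Name 'OtherDatastore') -DiskStorageFormat Thin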

Cheers,

Supreet

asdfas3e5rqwer
Contributor

I am using a different destination datastore. It's only the disks attached to the paravirtual adapter that are failing.

sjesse
Leadership

What error do you get?

asdfas3e5rqwer
Contributor

I don't get any errors, which is quite frustrating. It just doesn't work.

RickVerstegen
Expert

Have you already tried it via PowerCLI?
Use the Move-VM cmdlet with the -DiskStorageFormat parameter.

Example: Get-VM -Datastore <Datastorename> | Move-VM -Datastore <Datastorename> -DiskStorageFormat thin
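To confirm afterwards whether the disks actually converted, you could also check the per-disk format (the VM name here is a placeholder):

Get-VM -Name 'MyVM' | Get-HardDisk | Select-Object Name, StorageFormat, CapacityGB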

Was I helpful? Give a kudo for appreciation!
Blog: https://rickverstegen84.wordpress.com/
Twitter: https://twitter.com/verstegenrick
asdfas3e5rqwer
Contributor

Makes no difference.

PS C:\Users> get-vm -name VM | move-vm -Datastore datastore -DiskStorageFormat Thin

Name                 PowerState Num CPUs MemoryGB
----                 ---------- -------- --------
VM                   PoweredOn  8        16.000

As you can see, no errors. It just silently fails. As I've mentioned, this happens ONLY for disks attached to the paravirtual SCSI adapter. I can convert thick to thin and vice versa all day long for disks attached to the other types of SCSI controllers.
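For anyone else chasing this, here is a rough PowerCLI sketch to see which disks sit on which controller type and what format they ended up in (untested as posted; the VM name is a placeholder):

$vm    = Get-VM -Name 'MyVM'
$ctrls = Get-ScsiController -VM $vm
Get-HardDisk -VM $vm | ForEach-Object {
    $disk = $_
    $ctrl = $ctrls | Where-Object { $_.ExtensionData.Key -eq $disk.ExtensionData.ControllerKey }
    [pscustomobject]@{
        Disk       = $disk.Name
        Controller = $ctrl.Type          # e.g. ParaVirtual vs. VirtualLsiLogicSAS
        Format     = $disk.StorageFormat # Thin / Thick / EagerZeroedThick
    }
}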

asdfas3e5rqwer
Contributor

Since it seems to be a bug, I have opened a case with VMware and will follow up here in case anyone else encounters it.

asdfas3e5rqwer
Contributor

I just wanted to follow up on this to post the results from my case with VMware.

When we looked at the individual VMDKs attached to paravirtual adapters, they were already flagged as thin in the .vmdk descriptor files, despite showing as thick in the client display and in actual disk usage: du on the datastore showed each disk consuming the full provisioned size. The descriptors already contained:

# The Disk Data Base

#DDB

ddb.thinProvisioned = "1"
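A quick way to see the same mismatch from PowerCLI (the VM name here is a placeholder) is to compare what vCenter reports as provisioned against what is actually used; the per-disk usage itself we checked with du on the datastore, as mentioned above:

Get-VM -Name 'MyVM' | Select-Object Name, ProvisionedSpaceGB, UsedSpaceGB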

So we had to fall back on the old SDelete trick. We ran SDelete inside the guest Windows OS to zero out the unused blocks, and once that completed we did another svMotion to thin. That time it actually took: the disk came out correct both in the client display and in actual disk usage. This has worked on two VMs so far, so it certainly seems like some sort of bug, but at least we have a workaround. We use NetApp NFS storage and have the current NetApp plugin installed on these hosts.
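For reference, the SDelete pass inside the guest was essentially this (Sysinternals SDelete; the drive letter is whatever volume sits on the affected VMDK):

# -z zeroes out free space on the volume so the subsequent svMotion to thin can skip those blocks
.\sdelete.exe -accepteula -z C:

After that finished, we repeated the svMotion to thin exactly as before.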
