pnorthpm
Contributor

Space still used after converting to Thin provisioning


Hello All

So I had a VM that I migrated from an older ESX server to a new ESXi 6.5 server.

It was migrated initially using thick provisioning.

The drive is 80 GB and only 15 GB was being used, so I decided to convert it to a thin-provisioned disk.

After the conversion to thin was done, it was still showing 80 GB used, where I thought it would show 15 GB used.

After reading up on this I found that even though it was converted to thin, the unused space on the drive would need to be zeroed out, and then I would need to do another Storage vMotion to fix it.

The VM is a RHEL 5 server, and our storage is an HPE VSA 12.7 release, so we are using the internal drives in our DL360s to create a virtual SAN.

I'm sure it has been done, but I'm a little nervous running anything on these systems because they are production. Is there a way to zero out the unused space on the drive without harming the existing data?

What would be the easiest way to do this?

I would hope there is a way of doing this on a live system. Maybe I'm missing something, but I can't imagine using thin and not being able to reclaim that space when we clean up the filesystem.

Thanks for any and all help, you all have been great answering my questions and I really do appreciate it.

Paul.

7 Replies
continuum
Immortal

> is there a way to zero out the unused space on the drive without harming the existing data?
Yes - inside the Red Hat guest, log in as root and go to every mounted partition.
Then create a dummy file like this; it is deleted again once the whole partition has been filled:
cd /
dd if=/dev/zero of=dummy-file bs=1M; rm dummy-file
Do this when the VM does not have a lot to do.
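Since the one-liner has to be repeated on every mounted filesystem, here is one way the loop could be sketched as a small POSIX shell helper (the `zero_free_space` name and the optional MAX_MB safety cap are illustrative, not from the thread; run it only when the VM is quiet, since it briefly fills each filesystem to capacity):

```shell
# zero_free_space MOUNTPOINT [MAX_MB]
# Writes zero-filled 1 MiB blocks into a dummy file on MOUNTPOINT
# until the filesystem is full (dd then exits on "no space left",
# which is expected), and deletes the file so only zeroed free
# blocks remain. MAX_MB is an optional safety cap; omit it to zero
# all free space, as in the one-liner above.
zero_free_space() {
    mp="$1"
    cap="${2:+count=$2}"
    # $cap is deliberately unquoted: empty means "no count= operand".
    dd if=/dev/zero of="$mp/dummy-file" bs=1M $cap 2>/dev/null
    rm -f "$mp/dummy-file"
    sync
}

# Example: zero the free space on / and /home in turn.
# zero_free_space /
# zero_free_space /home
```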


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

pnorthpm
Contributor

Thank you so much for replying to me regarding this.

So just to be clear:

On the Red Hat Linux server, cd to every mount point and run the following command:

dd if=/dev/zero of=dummy-file bs=1M; rm dummy-file

Will this clear all the unused space, or just 1M? Can I increase the 1M to, say, 1000M if I know there is a lot more room to clear?

And then after it is done I would delete the dummy-file, correct?

thanks again

continuum
Immortal

Don't change the bs=1M - dd will keep writing 1 MB blocks until the mount point complains that there is no more free space.
dd will abort then, and immediately after that the dummy-file gets deleted.
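To illustrate the point with a safe, bounded demo (this is not the reclaim command itself): `bs=` only sets the size of each individual write, while `count=` sets how many blocks are written; when `count=` is omitted, dd keeps going until the filesystem is full.

```shell
# bs=1M count=4 writes exactly four 1 MiB blocks (4 MiB total);
# with count= omitted, dd would stop only on a "no space left" error.
dd if=/dev/zero of=/tmp/dd-demo bs=1M count=4 2>/dev/null
wc -c /tmp/dd-demo    # 4194304 bytes
rm /tmp/dd-demo
```

So a bigger bs= would not clear more space; it would only change the size of each write.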



pnorthpm
Contributor

This is great!

Thank you so much.

After this finishes and I delete the file, I would then need to Storage vMotion to another datastore for the changes to be seen, correct?

continuum
Immortal

Yep - I would use vmkfstools -i current.vmdk new.vmdk -d thin.
But svmotion should do the same.



pnorthpm
Contributor

Hey continuum,

So I tried running the command you mentioned on my RHEL 5 VM:

dd if=/dev/zero of=dummy-file bs=1M; rm -f dummy-file

It filled the / filesystem and then deleted the dummy-file just fine.

I then did a Storage vMotion to a new datastore and checked, but the full size still showed as used.

Does the VM need to be powered off before the Storage vMotion?

Am I missing something?

Should I be doing something different?

Do I need to set anything on my ESXi 6.5 server?

thanks again for all your help I really do appreciate it.

Paul.

estanev
Enthusiast

You probably have to remove all zeroed blocks with vmkfstools -K yourVMdisk.vmdk

Removing Zeroed Blocks
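The distinction Paul is running into can be seen with an ordinary sparse file (a local demo, not ESXi-specific): a file's apparent size and the blocks it actually occupies are tracked separately, which is the same split that underlies a thin-provisioned VMDK whose zeroed blocks stay allocated until something like vmkfstools -K punches them out.

```shell
# A sparse file has a large apparent size but few allocated blocks -
# the same idea as a thin-provisioned disk.
truncate -s 100M /tmp/sparse-demo.img
wc -c /tmp/sparse-demo.img   # apparent size: 104857600 bytes
du -k /tmp/sparse-demo.img   # allocated blocks: ~0 KiB
rm /tmp/sparse-demo.img
```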
