Kevin__
Contributor

Change 'object space reservation' for one or more VMs

Hi,

We have a problem with space consumption while migrating VMs from our old cluster to a new one: deduplication and compression is enabled on the new cluster, but it is showing no savings.

The default storage policy on the new cluster is RAID-5 with 'object space reservation' (OSR) set to 100.

Multiple thin-provisioned VMs have been migrated with combined host and storage vMotion from a vSphere Enterprise 5.5 cluster to a vSphere Enterprise Plus 6.0 cluster with vSAN Enterprise 6.2, using this default storage policy.

We found out that deduplication and compression won't work with thick disks and/or 'object space reservation' set to 100: vSAN shows savings of 0 bytes and a ratio of 1x. There is just one datastore and one disk group, and the new cluster is all-flash.

Because this is live data (running VMs), what can we do to change the 'object space reservation' to 0 so that deduplication and compression will work? (The VMs we migrated were thin-provisioned on the old cluster.)

I can't find any KB articles about changing the default storage policy for one or multiple VMs. Would it be as easy as creating a new policy with 'object space reservation' set to 0 and applying it to one or more VMs? We need a solid workaround/fix and can't lose data.

7 Replies
TheBobkin
Champion

Hello Kevin,

Yes, you can create a new Storage Policy (SP) with the same Rule sets but OSR=0 (e.g. FTT=1, FTM=RAID5, SW=1, OSR=0) via Home > Policies & Profiles > VM Storage Policies.

You can also clone an existing SP and just edit the OSR rule to be OSR=0.

Then you can apply this new SP to VMs via Right-click VM > VM Policies > Change SP, with 'Apply to all'.

Or

Right-click VM > Edit Settings > select the hard disks and change the SP per disk there.

This can be done against multiple VMs at once from:

Home > Policies and Profiles > VM Storage Policies > select SP > VMs > click and Shift-click to highlight multiple VMs > Right-click > VM Policies > Edit VM Storage Policies > select SP.

This should have no impact on the accessibility of the data or the performance of running VMs. However, do check via RVC or the Web Client whether the change triggers significant resync; if the additional IOPS causes too much contention, allow the resync to complete before proceeding with more (or very large) VMs, and/or change fewer VMs/disks at a time.
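For example, one way to watch resync progress is from RVC, cd'd to the cluster path (a sketch; the trailing dot refers to the current cluster object):

> vsan.resync_dashboard .

This prints the objects currently resyncing and the bytes left to sync; re-run it periodically and let it drain to zero before changing the next batch of VMs.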

docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-9A3650CE-36AA-459F-BC9F-D6D6DAAA9EB9.html

yellow-bricks.com/2014/03/22/vsan-basics-changing-vms-storage-policy/

Bob

Kevin__
Contributor

Hi Bob,

Thank you for your reply.

I created the new policy and applied it to some VMs, and this worked as you described.

This can be done against multiple VMs at once from:

Home > Policies and Profiles > VM Storage Policies > select SP > VMs > click and Shift-click to highlight multiple VMs > Right-click > VM Policies > Edit VM Storage Policies > select SP.

This option is not available in vSphere 6.0 with vSAN 6.2; is it vSphere 6.5 only? Per-VM is no problem in this case.

The output after the policy change in RVC:

/localhost/***/computers/***cluster> vsan.vm_object_info ./resourcePool/pools/2.\ ***/vms/***machine***

VM *****machine*****:

  Namespace directory

    DOM Object: 6a2eb159-e850-8973-147b-246e966a36b8 (v3, owner: *****, policy: spbmProfileGenerationNumber = 1, hostFailuresToTolerate = 1, spbmProfileId = 22b717ed-6844-4353-882a-c4cceec2f069, proportionalCapacity = [0, 100], replicaPreference = Capacity, stripeWidth = 1)

      RAID_5

        Component: 6a2eb159-9944-f073-4780-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abec9, ssd: naa.50000397cc9033dd,

                                                         votes: 2, usage: 0.2 GB)

        Component: 6a2eb159-9c28-f273-bbc8-246e966a36b8 (state: ACTIVE (5), host: v*****, md: naa.50000397bc8abebd, ssd: naa.50000397cc9033c1,

                                                         votes: 1, usage: 0.1 GB)

        Component: 6a2eb159-52aa-f373-49a5-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abeb5, ssd: naa.50000397cc90341d,

                                                         votes: 1, usage: 0.2 GB)

        Component: 6a2eb159-d118-f573-5563-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abf01, ssd: naa.50000397cc90339d,

                                                         votes: 1, usage: 0.2 GB)

  Disk backing: [*****-Datastore] 6a2eb159-e850-8973-147b-246e966a36b8/*****.vmdk

    DOM Object: 712eb159-df4f-dd2f-515d-246e966a36b8 (v3, owner: *****, policy: spbmProfileGenerationNumber = 1, hostFailuresToTolerate = 1, spbmProfileId = 22b717ed-6844-4353-882a-c4cceec2f069, replicaPreference = Capacity, proportionalCapacity = 0)

      RAID_5

        Component: 712eb159-6b8a-5330-b613-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abae9, ssd: naa.50000397cc9033f5,

                                                         votes: 2, usage: 13.3 GB)

        Component: 712eb159-c981-5530-c74c-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abec9, ssd: naa.50000397cc9033dd,

                                                         votes: 1, usage: 13.3 GB)

        Component: 712eb159-bd10-5730-1d9d-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abebd, ssd: naa.50000397cc9033c1,

                                                         votes: 1, usage: 13.3 GB)

        Component: 712eb159-197e-5830-d000-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abf01, ssd: naa.50000397cc90339d,

                                                         votes: 1, usage: 13.3 GB)

So the DOM object will stay at 100, but the disk is now set to 0. If needed, would only a storage migration (to another datastore) change this?

Regards,

Kevin

TheBobkin
Champion

Hello Kevin,

That looks fine: we see "proportionalCapacity = 0" on the vmdk Object. Namespace Objects are different and will always appear as "proportionalCapacity = [0, 100]" in RVC; this is expected behaviour.

Bob

Kevin__
Contributor

Hi Bob,

Again thank you for your reply.

If a VM is thick-provisioned on the current storage cluster and is moved with Storage vMotion and the correct vSAN policy (with object space reservation set to 0), will the disk become thin?

And how can I check this for VMs that have already been migrated to the new storage cluster? Because if the disk(s) are already thick and 'same as source' was selected during Storage vMotion, this won't change anything about the thick disks?

TheBobkin
Champion

Hello Kevin,

If you change the destination Storage Policy to a thin one it *should* thin the vmdks.

If an Object has a thin Storage Policy applied and is compliant it *should* be thin; however, there are a few reasons why this may not occur.

I have found that the simplest way to check for thick Objects is to use RVC, cd to the cluster level and run:

> vsan.vm_object_info ./resourcePool/vms/*

(increase your session screen buffer to a few thousand lines before running this)

Then copy the output into Notepad++ and Ctrl+F for all instances of "proportionalCapacity = 100".

This will assist in identifying which vmdk Objects are thick.

Change the path accordingly if your VMs are in resource pools (and run it individually for each pool).
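If you would rather filter on the host than scroll the buffer, RVC can also be run non-interactively and piped through grep (a sketch, assuming RVC's -c option for passing commands; the user, vCenter address, and cluster path are placeholders to adjust for your environment):

$ rvc -c 'vsan.vm_object_info /localhost/<datacenter>/computers/<cluster>/resourcePool/vms/*' -c 'quit' <user>@<vcenter> | grep 'proportionalCapacity = 100'

Note that grep only prints the matching lines, so for mapping matches back to specific VMs the interactive approach above gives more context.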

Bob

Kevin__
Contributor

Great, thank you!

One more question.

As said in the opening post, all VMs were moved via Storage vMotion to the vSAN cluster, but with the wrong storage policy (OSR set to 100).

Last night I updated all VMs to the new storage policy (OSR set to 0). The savings and ratio values are changing slowly...

As deduplication is done nearline (slide 20: VMworld 2017 - Top 10 things to know about vSAN) and we moved all VMs with the wrong storage policy, has deduplication and compression been done in the most efficient way? The current savings and ratio aren't that good (see image below); we know we should be able to reach a ratio of at least 2:1, and even higher.

What do you recommend: just wait (and if so, how long)? The vSAN cluster is all-flash.

Or should we do a Storage vMotion of all current VMs (to another datastore/cluster and back) with the right vSAN storage policy (thin) to force deduplication again?

Schermafbeelding 2017-09-19 om 14.17.05.png

Regards,
Kevin

TheBobkin
Champion

Hello Kevin,

That's pretty corner-case and is a good question.

As vSAN dedupes data as it is committed to capacity, the blocks would have to be written for them to be deduped; whether this occurred or not depends on whether the LSOM components of the VMs were re-created when you changed the Storage Policy.

Try storage-migrating a few VMs off of and back onto this vSAN cluster to see if the ratio increases at all; if it does, proceed with more VMs.

Do note that deduplication generally doesn't start showing higher levels of return until disk groups are more highly utilized (e.g. ~70% used).
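One way to keep an eye on that utilization is from RVC at the cluster path (a sketch; the trailing dot refers to the current cluster object):

> vsan.disks_stats .

This lists each disk in the cluster with its used-capacity percentage, so you can see how close the disk groups are to the range where deduplication returns tend to improve.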

Bob
