1 Reply Latest reply on Sep 13, 2017 12:20 PM by TheBobkin

    Change 'object space reservation' for one or more vm's

    Kevin__ Lurker

      Hi,

       

      We have a problem with space consumption while migrating VMs from our old cluster to a new one, and found out that deduplication and compression are enabled but showing no savings:

       

      Deduplication and compression are enabled, and we have a RAID-5 storage policy set as default with 'object space reservation' set to 100.

      Multiple thin-provisioned VMs have been migrated with host and storage vMotion from a vSphere Enterprise 5.5 cluster to a vSphere Enterprise Plus 6.0 cluster with vSAN Enterprise 6.2, using this default storage policy.

      We found out that deduplication and compression won't work with thick disks and/or 'object space reservation' set to 100: vSAN shows savings of 0 bytes and a ratio of 1x. There is just one datastore and one disk group, and we use all-flash storage on this new cluster.

       

      Because this is live data (running VMs), how can we change 'object space reservation' to 0 so that deduplication and compression will work? (The VMs we migrated were thin provisioned on the old cluster.)

       

      I can’t find any KB articles about changing the default storage policy for one or multiple VMs. Would it be as easy as creating a new policy with 'object space reservation' set to 0 and applying it to one or more VMs? We need a good workaround/fix and can’t lose data.

       

        • 1. Re: Change 'object space reservation' for one or more vm's
          TheBobkin Expert
          vExpert, VMware Employee

          Hello Kevin,

           

           

          Yes, you can create a new Storage Policy (SP) with the same Rule sets but OSR=0 (e.g. FTT=1, FTM=RAID5, SW=1, OSR=0) via Home > Policies & Profiles > VM Storage Policies

          You can also clone an existing SP and just edit the OSR rule to OSR=0.

          Then you can apply this new SP to VMs via Right-click VM > VM Policies > Change SP and choose 'Apply to all'

          Or

          Right-click VM > Edit Settings > Select the Hard Disks and change the SP per disk here.

          This can be done against multiple VMs at once from

          Home > Policies and Profiles > VM Storage Policies > Select SP > VMs > then click/Shift-click to highlight multiple VMs > Right-click > VM Policies > Edit VM Storage Policies > Select SP

           

          This should have no impact on the accessibility of the data or the performance of running VMs, but do check via RVC/Web Client whether it results in significant resync. If the additional IOPS is causing too much contention, allow the resync to complete before proceeding with more (or very large) VMs, and/or change fewer VMs/disks at a time.

           

          docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-9A3650CE-36AA-459F-BC9F-D6D6DAAA9EB9.html

          yellow-bricks.com/2014/03/22/vsan-basics-changing-vms-storage-policy/

           

           

          Bob

          • 2. Re: Change 'object space reservation' for one or more vm's
            Kevin__ Lurker

            Hi Bob,

             

            Thank you for your reply.

            I created the new policy, applied it to some VMs, and it worked as you described.

            This can be done against multiple VMs at once from

            Home > Policies and Profiles > VM Storage Policies > Select SP > VMs > then Click Shift-Click to highlight multiple VMs > Right-click > VM Policies > Edit VM Storage Policies > Select SP

             

            This option is not available in vSphere 6.0 with vSAN 6.2 — is it vSphere 6.5 only? Per-VM is no problem in this case.

             

            The output after the policy change in RVC:

             

            /localhost/***/computers/***cluster> vsan.vm_object_info ./resourcePool/pools/2.\ ***/vms/***machine***

            VM *****machine*****:

              Namespace directory

                DOM Object: 6a2eb159-e850-8973-147b-246e966a36b8 (v3, owner: *****, policy: spbmProfileGenerationNumber = 1, hostFailuresToTolerate = 1, spbmProfileId = 22b717ed-6844-4353-882a-c4cceec2f069, proportionalCapacity = [0, 100], replicaPreference = Capacity, stripeWidth = 1)

                  RAID_5

                    Component: 6a2eb159-9944-f073-4780-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abec9, ssd: naa.50000397cc9033dd,

                                                                     votes: 2, usage: 0.2 GB)

                    Component: 6a2eb159-9c28-f273-bbc8-246e966a36b8 (state: ACTIVE (5), host: v*****, md: naa.50000397bc8abebd, ssd: naa.50000397cc9033c1,

                                                                     votes: 1, usage: 0.1 GB)

                    Component: 6a2eb159-52aa-f373-49a5-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abeb5, ssd: naa.50000397cc90341d,

                                                                     votes: 1, usage: 0.2 GB)

                    Component: 6a2eb159-d118-f573-5563-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abf01, ssd: naa.50000397cc90339d,

                                                                     votes: 1, usage: 0.2 GB)

              Disk backing: [*****-Datastore] 6a2eb159-e850-8973-147b-246e966a36b8/*****.vmdk

                DOM Object: 712eb159-df4f-dd2f-515d-246e966a36b8 (v3, owner: *****, policy: spbmProfileGenerationNumber = 1, hostFailuresToTolerate = 1, spbmProfileId = 22b717ed-6844-4353-882a-c4cceec2f069, replicaPreference = Capacity, proportionalCapacity = 0)

                  RAID_5

                    Component: 712eb159-6b8a-5330-b613-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abae9, ssd: naa.50000397cc9033f5,

                                                                     votes: 2, usage: 13.3 GB)

                    Component: 712eb159-c981-5530-c74c-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abec9, ssd: naa.50000397cc9033dd,

                                                                     votes: 1, usage: 13.3 GB)

                    Component: 712eb159-bd10-5730-1d9d-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abebd, ssd: naa.50000397cc9033c1,

                                                                     votes: 1, usage: 13.3 GB)

                    Component: 712eb159-197e-5830-d000-246e966a36b8 (state: ACTIVE (5), host: *****, md: naa.50000397bc8abf01, ssd: naa.50000397cc90339d,

                                                                     votes: 1, usage: 13.3 GB)

            So the namespace DOM object will stay at 100, but the disk is now set to 0. If needed, would only a storage migration (to another datastore) change this?

             

            Regards,

            Kevin

            • 3. Re: Change 'object space reservation' for one or more vm's
              TheBobkin Expert
              vExpert, VMware Employee

              Hello Kevin,

               

               

              That looks fine: we see "proportionalCapacity = 0" on the vmdk Object. Namespace Objects are different and will always appear as "proportionalCapacity = [0, 100]" in RVC; this is expected behaviour.

               

               

              Bob

              • 4. Re: Change 'object space reservation' for one or more vm's
                Kevin__ Lurker

                Hi Bob,

                 

                Again thank you for your reply.

                 

                If a VM is thick provisioned on the current storage cluster and moved with Storage vMotion and the correct vSAN policy (with 'object space reservation' set to 0), will the disk become thin?

                And how can I check this for VMs that are already migrated to the new storage cluster? Because if the disks are already thick and 'same as source' was selected during Storage vMotion, this won't change anything about the thick disks?

                • 5. Re: Change 'object space reservation' for one or more vm's
                  TheBobkin Expert
                  vExpert, VMware Employee

                  Hello Kevin,

                   

                   

                  If you change the destination Storage Policy to a thin one it *should* thin the vmdks.

                   

                  If an Object has a thin Storage Policy applied and is compliant it *should* be thin, however there are a few reasons this may not occur.

                  I have found that the simplest way to check for thick Objects is to use RVC, cd to the cluster level and run:

                  > vsan.vm_object_info ./resourcePools/vms/*

                  (increase your session screen buffer to a few thousand lines before running this)

                  Then copy this into Notepad++ and Ctrl+F for all instances of "proportionalCapacity = 100"

                  This will assist in identifying which vmdk Objects are thick.

                   

                  Change the path accordingly if your VMs are in resource pools (and run it individually for each pool)
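                  If searching in Notepad++ gets tedious, the same check can be scripted. This is a minimal sketch (the function name is hypothetical), assuming the RVC output format shown earlier in this thread: it picks out DOM Object lines whose policy contains "proportionalCapacity = 100", and namespace Objects are skipped automatically because they report "[0, 100]" rather than "100":

                  ```python
                  def find_thick_objects(rvc_output):
                      """Scan vsan.vm_object_info output and return the UUIDs of
                      Objects whose policy has proportionalCapacity = 100 (OSR=100,
                      i.e. space-reserved/thick). Namespace Objects show the literal
                      'proportionalCapacity = [0, 100]' and therefore never match."""
                      thick = []
                      for line in rvc_output.splitlines():
                          line = line.strip()
                          if line.startswith("DOM Object:") and "proportionalCapacity = 100" in line:
                              thick.append(line.split()[2])  # third token is the Object UUID
                      return thick
                  ```

                  Feed it the saved RVC session text and it returns only the thick vmdk Object UUIDs, which you can then map back to the owning VMs in the surrounding output.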

                   

                   

                  Bob