VMware Cloud Community
SamiEMC
Enthusiast

Thin or Thick on VMware on Thin on Storage (VMAX)?

Hi

Is it recommended to have thin on VMware on thin on VMAX, or thick on VMware on thin on storage (VMAX or any other array)?

Please advise on management and performance, and list the disadvantages.

4 Replies
mirceaflorin
Contributor

Hi,

  I can tell you that we had a few issues when using thin storage. The reason was that we, as VMware admins, saw there was enough storage available on the datastores and, as our internal work instructions for specific customers stated, we created big thin VMs. Unfortunately, as the VMs started to grow, at some point there were issues because at the storage level there wasn't enough space.

I would say that assigning thick disks when creating a VM ensures there is enough storage on the box to avoid such issues. But of course, if you have customers that ask for A LOT of storage and only use a few gigs, I think it's a waste to assign thick VM disks.

If there is continuous communication between VMware admins and storage admins, thin on VMware and thin on storage should be just fine.
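
For what it's worth, the VMware-side overcommitment that bit us can be spotted before the array runs dry. Below is a minimal sketch using pyVmomi (the vCenter hostname, credentials and the 100% threshold are placeholders, not something from this thread) that compares provisioned space against capacity per datastore:

    # Minimal sketch, assuming pyVmomi is installed; host/user/password are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab use only; validate certs in production
    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        provisioned = s.capacity - s.freeSpace + (s.uncommitted or 0)  # used + thin "promises"
        ratio = float(provisioned) / s.capacity
        flag = "  <-- overcommitted" if ratio > 1.0 else ""
        print("%-25s capacity=%7.1f GB provisioned=%7.1f GB (%3.0f%%)%s"
              % (s.name, s.capacity / 2.0**30, provisioned / 2.0**30, ratio * 100, flag))
    view.DestroyView()
    Disconnect(si)

Something like this run on a schedule would have warned us that the thin VMs had promised more than the datastores (and the pool behind them) could actually deliver.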

Regards,

Mircea

marcelo_soares
Champion

Depends on how many alarms you want to control. I usually keep only one side thin (normally the VMware side, it's easier for me to administer). Keep in mind that creating plain thick disks will not consume the space on the thin LUNs automatically; only thick eager zeroed disks will do this.

Also, using thin on the SAN you may receive errors on the VMware side saying "there is not enough disk space" even if the VMFS has free space; this is because the thin LUN has no room left to grow.
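
To add to that: which of your existing disks are thin, plain thick, or eager zeroed can be pulled from the API rather than checking each VM by hand. A rough pyVmomi sketch (it assumes an already-connected ServiceInstance called si, as in the usual pyVmomi samples):

    # Rough sketch; assumes an existing pyVmomi connection "si".
    from pyVmomi import vim

    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:          # skip inaccessible VMs and bare templates
            continue
        for dev in vm.config.hardware.device:
            if not isinstance(dev, vim.vm.device.VirtualDisk):
                continue
            backing = dev.backing
            if not isinstance(backing, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
                continue               # skip RDMs and other backing types
            if backing.thinProvisioned:
                fmt = "thin"
            elif backing.eagerlyScrub:
                fmt = "eagerzeroedthick"   # the only format that claims all blocks on a thin LUN up front
            else:
                fmt = "zeroedthick"
            print("%s / %s: %d GB, %s" % (vm.name, dev.deviceInfo.label,
                                          dev.capacityInKB // (1024 * 1024), fmt))
    view.DestroyView()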

Marcelo Soares
abhilashhb
VMware Employee

Hey SamiEMC,

Take a look at the blog post below by Cormac. Almost all the points regarding thick and thin provisioning, both at the hypervisor and array levels, are discussed there.

Thin Provisioning -  What's the scoop? | VMware vSphere Blog - VMware Blogs

Abhilash B
LinkedIn : https://www.linkedin.com/in/abhilashhb/

admin
Immortal

Thin on Thin? Where should you do Thin Provisioning?

With the new awesome thin provisioning GUI and more flexible virtual disk behavior (hallelujah – no more “clone/template=eagerzeroedthick”!) in vSphere, I’m getting more questions re: best practices when you have the choice of doing it at the array level or the VMware layer.

This is covered in Chapter 6 of the upcoming Mastering VMware vSphere 4.0 that Scott Lowe is authoring (more here); I guest-authored that chapter for Scott. It is entitled “VMware vSphere 4.0 - Creating and Managing Storage Devices”.

Read on for more details – and there’s LOTS more in the book!

 

Ok – first – some critical understanding:

Virtual Disks come in three formats:

  • Thin - in this format, the size of the VMDK file on the datastore is only as large as the amount of data actually used within the VM itself. For example, if you create a 500GB virtual disk and place 100GB of data in it, the VMDK file will be 100GB in size. As I/O occurs in the guest, the vmkernel zeroes out the space needed right before the guest I/O is committed, growing the VMDK file as it goes.


  • Thick (otherwise known as zeroedthick) - in this format, the size of the VMDK file on the datastore is the size of the virtual disk that you create, but the file is not “pre-zeroed”. For example, if you create a 500GB virtual disk and place 100GB of data in it, the VMDK will appear to be 500GB at the datastore filesystem, but contains only 100GB of data on disk. As I/O occurs in the guest, the vmkernel zeroes out the space needed right before the guest I/O is committed, but the VMDK file size does not grow (since it was already 500GB).


  • Eagerzeroedthick - in this format, the size of the VMDK file on the datastore is the size of the virtual disk that you create, and the file is “pre-zeroed”. For example, if you create a 500GB virtual disk and place 100GB of data in it, the VMDK will appear to be 500GB at the datastore filesystem, and contains 100GB of data and 400GB of zeros on disk. As I/O occurs in the guest, the vmkernel does not need to zero the blocks prior to the I/O occurring. This results in improved I/O latency and fewer back-end storage I/O operations during normal I/O, but significantly more back-end storage I/O operations up front during the creation of the VM. (See the sketch after this list for how these formats map to API flags.)
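
As promised above, here is a hedged pyVmomi sketch of how the three formats come down to two flags on the flat-file backing when a disk is added through the API (controller key and unit number are illustrative placeholders, not values from the post):

    # Illustrative sketch; controller_key/unit_number are placeholders for a real SCSI controller.
    from pyVmomi import vim

    def make_disk_spec(size_gb, fmt="thin", controller_key=1000, unit_number=1):
        backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
        backing.diskMode = "persistent"
        backing.thinProvisioned = (fmt == "thin")            # grows on demand
        backing.eagerlyScrub = (fmt == "eagerzeroedthick")   # pre-zeroes every block at creation
        # both flags False together gives the plain Thick (zeroedthick) format

        disk = vim.vm.device.VirtualDisk()
        disk.backing = backing
        disk.capacityInKB = size_gb * 1024 * 1024
        disk.controllerKey = controller_key
        disk.unitNumber = unit_number

        spec = vim.vm.device.VirtualDeviceSpec()
        spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
        spec.device = disk
        return spec

    # e.g. vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[make_disk_spec(500, "thin")]))

In other words, Thin and Thick differ only in thinProvisioned, and Eagerzeroedthick is Thick plus eagerlyScrub.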

  In VMware Infrastructure 3.5, the CLI tools (service console or RCLI) could be used to configure the virtual disk format to any type, but when created via the GUI, certain configurations were the default (with no GUI option to change the type) 

  • On VMFS datastores, new virtual disks defaulted to Thick (zeroedthick)
  • On NFS datastores, new virtual disks defaulted to Thin
  • Deploying a VM from a template defaulted to eagerzeroedthick format
  • Cloning a VM defaulted to an eagerzeroedthick format

  This is why the creation of a new virtual disk has always been very fast, but in VMware Infrastructure 3.x cloning a VM or deploying a VM from a template (even with virtual disks that are nearly empty) took much longer.   

Also, storage array-level thin-provisioning mechanisms work well with Thin and Thick formats, but not with the eagerzeroedthick format (since all the blocks are zeroed in advance) - so potential storage savings of storage-array level thin provisioning were lost as virtual machines were cloned or deployed from templates.

Also – BTW – if you have thin provisioning at the array level and are using EITHER NFS or VMFS, that clone/template behavior is why you can save a lot of storage $$ by going to vSphere.

The Virtual Disk behavior in vSphere has changed substantially, resulting in significantly improved storage efficiency - most customers can reasonably expect up to 50% higher storage efficiency than with ESX/ESXi 3.5, across all storage types.

  • The Virtual Disk format selection is available in the creation GUI
  • vSphere still uses a default format of Thick (zeroedthick), but in the virtual disk creation dialog, there’s a simple radio button to thin-provision the virtual disk (if your block storage array doesn’t support array-level thin provisioning).
  • Also note that there is a radio button to use Fault Tolerance, which employs the eagerzeroedthick format on VMFS volumes.

  [Screenshot: the new virtual disk configuration wizard]

Above is the new virtual disk configuration wizard. Note that in vSphere 4 the virtual disk type can be easily selected via the GUI, including thin provisioning across all array and datastore types. Selecting the “Support Clustering features such as Fault Tolerance” option creates an eagerzeroedthick virtual disk on VMFS datastores.

Clone/Deploy from Template operations no longer always use the eagerzeroedthick format; instead, when you clone a VM or deploy from a template, a dialog box enables you to select the destination format (it defaults to the same type as the source).


Also, the virtual disk format can be easily changed from thin to eagerzeroedthick. It can be done via the GUI, but not in a “natural” location (which would be the Virtual Machine settings screen). If you navigate in the datastore browser to a given virtual disk and right-click it, you see a GUI option, as shown below.

[Screenshot: datastore browser right-click option for converting a thin virtual disk]
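
The same conversion can be scripted rather than clicked. The underlying API call is VirtualDiskManager.InflateVirtualDisk_Task per the vSphere API reference; a sketch along these lines (the VMDK path and datacenter lookup are placeholders, and I haven't verified the exact binding name in every pyVmomi release):

    # Sketch only; the VMDK path is a placeholder and the VM should be powered off.
    from pyVim.task import WaitForTask

    content = si.RetrieveContent()            # si: existing pyVmomi ServiceInstance
    dc = content.rootFolder.childEntity[0]    # assumes the first child is the target datacenter

    # Inflate converts a thin VMDK to eagerzeroedthick in place.
    task = content.virtualDiskManager.InflateVirtualDisk_Task(
        name="[datastore1] myvm/myvm.vmdk", datacenter=dc)
    WaitForTask(task)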

You cannot “shrink” a thick or eagerzeroedthick disk to thin format directly through the virtual machine settings in the vSphere client, but this can be accomplished non-disruptively via the new storage vmotion (allowing VI3.x customers to reclaim a LOT of space).
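
The Storage VMotion route can also be driven through the API. A rough sketch (the per-disk backing in the relocate spec is available in newer API versions; the target datastore and the assumption that every disk should become thin are mine, not the author's):

    # Rough sketch; "target_ds" is a vim.Datastore you have already looked up.
    from pyVmomi import vim

    def relocate_as_thin(vm, target_ds):
        locators = []
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
                backing.diskMode = "persistent"
                backing.thinProvisioned = True    # re-write this disk as thin at the destination
                locators.append(vim.vm.RelocateSpec.DiskLocator(
                    diskId=dev.key, datastore=target_ds, diskBackingInfo=backing))
        spec = vim.vm.RelocateSpec(datastore=target_ds, disk=locators)
        return vm.RelocateVM_Task(spec=spec)      # non-disruptive with Storage VMotion licensed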

The eagerzeroedthick virtual disk format is required for VMware Fault Tolerant VMs on VMFS (if they are thin, conversion occurs automatically as the VMware Fault Tolerance feature is enabled). It also continues to be mandatory for Microsoft clusters (refer to the KB article) and is recommended for the highest-I/O-workload Virtual Machines, where the slight latency and additional I/O created by the “zeroing” that occurs as part and parcel of virtual machine I/O to new blocks is unacceptable. From a performance standpoint, thick and pre-zeroed disks perform identically for I/Os to blocks that have already been written to, within the margin of error of the test.

So… What’s right - thin provisioning at the VMware layer or the storage layer? The general answer is BOTH.

If your array supports thin provisioning, you’ll generally get more efficiency using the array-level thin provisioning in most operational models.

  1. If you thick provision at the LUN or filesystem level, there will always be large amounts of unused space until you start to get it highly utilized - unless you start small and keep extending the datastore, which operationally is heavyweight and generally a PITA.
  2. When you use thin provisioning at the array level, with either NFS or VMFS on block storage, you always benefit. In vSphere, all the default virtual disk types - both Thin and Thick (with the exception of eagerzeroedthick) - are “storage thin provisioning friendly” (since they don’t “pre-zero” the files). Deploying from templates and cloning VMs also use Thin and Thick (not eagerzeroedthick, as was the case in prior versions).
  3. Thin provisioning also tends to be more efficient the larger the scale of the “thin pool” (i.e. the more oversubscribed objects) - and on an array, this construct (every vendor calls it something slightly different) tends to be broader than a single datastore, and therefore the efficiency tends to be higher. A toy calculation below illustrates this.
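
A toy calculation (numbers invented purely for illustration) shows why point 3 holds: the bigger the shared pool behind the oversubscribed objects, the less headroom you need in total, because a growth spike in one VM is absorbed by capacity the others aren't using.

    # Toy numbers, purely illustrative; assumes only one VM spikes at a time (a simplification).
    vms = 20
    used_gb = 100          # data actually written per VM, on average
    spike_gb = 150         # worst-case short-term growth of any single VM

    # Four separate thin pools of 5 VMs each: every pool must carry its own spike headroom.
    separate_pools = 4 * (5 * used_gb + spike_gb)       # 4 * 650 = 2600 GB

    # One array-level pool behind all 20 VMs: the spike headroom is paid for once.
    single_pool = vms * used_gb + spike_gb              # 2150 GB

    # Thick provisioning every VM at 500 GB would need 20 * 500 = 10000 GB.
    print("4 small pools: %d GB, one big pool: %d GB" % (separate_pools, single_pool))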