Hi,
I can tell you that we had a few issues when using thin storage. The reason was that we, as VMware admins, saw enough free space on the datastores and, following our internal work instructions for specific customers, created big thin-provisioned VMs. Unfortunately, as the VMs started to grow, we ran into problems because at the storage-array level there wasn't actually enough capacity left.
I would say that assigning thick disks when creating a VM ensures the storage is actually reserved and avoids these issues. But of course, if you have customers that ask for A LOT of storage and only use a few gigs, assigning thick disks is a waste.
If there is continuous communication between the VMware admins and the storage admins, thin on VMware and thin on the storage array should be just fine.
Regards,
Mircea
It depends on how many alarms you want to manage. I usually keep only one side thin (normally the VMware side; it's easier for me to administer). Keep in mind that creating plain (lazy-zeroed) thick disks will not automatically consume space on the thin LUNs; only eager-zeroed thick disks do, because every block is zeroed at creation.
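As a rough illustration (the sizes and datastore paths below are just placeholders), this is how the three disk formats behave when created with vmkfstools from the ESX/ESXi command line against a datastore on a thin LUN:

    # thin: space is allocated and zeroed on demand, so the thin LUN consumes almost nothing up front
    vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/testvm/testvm_thin.vmdk

    # zeroedthick (plain "thick"): VMFS reserves the full 40G, but blocks are only zeroed on first
    # write, so the array-side thin LUN still grows very little at creation time
    vmkfstools -c 40G -d zeroedthick /vmfs/volumes/datastore1/testvm/testvm_zthick.vmdk

    # eagerzeroedthick: every block is zeroed at creation, so the full 40G is written to the array
    # and the thin LUN grows immediately (unless the array detects and discards zeroes)
    vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/testvm_ezt.vmdk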
Also, when using thin on the SAN you may receive errors on the VMware side saying "there is not enough disk space" even though the VMFS datastore shows free space; this happens when the thin LUN has no room left on the array to grow.
Hey SamiEMC,
Take a look at the blog post by Cormac below. Almost all the points regarding thick and thin provisioning, both at the hypervisor and array levels, are discussed there.
Thin Provisioning - What's the scoop? | VMware vSphere Blog - VMware Blogs
With the new awesome thin provisioning GUI and more flexible virtual disk behavior (hallelujah – no more "clone/template=eagerzeroedthick"!) in vSphere, I'm getting more questions re: best practices when you have the choice of doing it at the array level or the VMware layer.
This is covered in chapter 6 of the upcoming Mastering VMware vSphere 4.0 that Scott Lowe is authoring (more here). I’ve guest authored Chapter 6 for Scott. Chapter 6 is entitled – “VMware vSphere 4.0 - Creating And Managing Storage Devices”
Read on for more details – and there’s LOTS more in the book!
Ok – first – some critical understanding:
Virtual Disks come in three formats:
- Thin: space is allocated and zeroed on demand, as the guest first writes to each block.
- Thick (zeroedthick): the full capacity is allocated in VMFS at creation, but blocks are zeroed only on first write.
- Eagerzeroedthick: the full capacity is allocated and every block is zeroed at creation time.
In VMware Infrastructure 3.5, the CLI tools (service console or RCLI) could be used to set the virtual disk format to any of these types, but when a disk was created via the GUI, certain formats were the default, with no GUI option to change the type.
This is why the creation of a new virtual disk has always been very fast (the default zeroedthick format does not zero blocks up front), while in VMware Infrastructure 3.x cloning a VM or deploying a VM from a template (even with virtual disks that are nearly empty) took much longer, because the destination disks were created as eagerzeroedthick and every block had to be zeroed out.
Also, storage array-level thin-provisioning mechanisms work well with Thin and Thick formats, but not with the eagerzeroedthick format (since all the blocks are zeroed in advance) - so potential storage savings of storage-array level thin provisioning were lost as virtual machines were cloned or deployed from templates.
Also – BTW – if you have TP at the array level and are using EITHER NFS or VMFS, that clone/template behavior is also why you can save a lot of storage $$ by going to vSphere.
The Virtual Disk behavior in vSphere has changed substantially, resulting in significantly improved storage efficiency: most customers can reasonably expect up to 50% higher storage efficiency than with ESX/ESXi 3.5, across all storage types.
In the new virtual disk configuration wizard in vSphere 4, the virtual disk type can be easily selected via the GUI, including thin provisioning across all array and datastore types. Selecting the "Support Clustering features such as Fault Tolerance" option creates an eagerzeroedthick virtual disk on VMFS datastores.
Clone/Deploy from Template operations no longer always use the eagerzeroedthick format; when you clone a VM or deploy from a template, the dialog box lets you select the destination disk type (it defaults to the same type as the source).
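The same choice can be made from the command line when cloning a disk, which is handy for scripted deployments; a rough sketch (the paths here are placeholders, not from the post):

    # clone a virtual disk and pick the destination format explicitly (here: thin),
    # so the copy is not inflated to eagerzeroedthick
    vmkfstools -i /vmfs/volumes/datastore1/template/template.vmdk /vmfs/volumes/datastore1/newvm/newvm.vmdk -d thin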
Also, the virtual disk format can easily be changed from thin to eagerzeroedthick. It can be done via the GUI, though not in the most "natural" location (which would be the virtual machine settings screen): if you navigate to a given virtual disk in the datastore browser and right-click it, you'll find a GUI option to inflate it.
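If taking the VM offline is acceptable, the same thin-to-eagerzeroedthick conversion can also be done from the command line with vmkfstools; a sketch (the path is a placeholder):

    # inflate a thin virtual disk to eagerzeroedthick in place; run it against a powered-off VM
    vmkfstools -j /vmfs/volumes/datastore1/testvm/testvm.vmdk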
You cannot “shrink” a thick or eagerzeroedthick disk to thin format directly through the virtual machine settings in the vSphere client, but this can be accomplished non-disruptively via the new Storage VMotion (allowing VI3.x customers to reclaim a LOT of space).
The eagerzeroedthick virtual disk format is required for VMware Fault Tolerance VMs on VMFS (if they are thin, conversion occurs automatically when Fault Tolerance is enabled). It also continues to be mandatory for Microsoft clusters (refer to KB article) and is recommended for the highest-I/O-workload virtual machines, where the slight latency and additional I/O created by the “zeroing” that occurs on first writes to new blocks is unacceptable. From a performance standpoint, for I/Os to blocks that have already been written to, thick and pre-zeroed disks perform identically, within the margin of error of the test.
So… What’s right - thin provisioning at the VMware layer or the storage layer? The general answer is BOTH.
If your array supports thin provisioning, you'll generally get more efficiency from array-level thin provisioning in most operational models.