VMFS block size

    Block size and vmdk size limitation

    When you create a VMFS datastore on your VMware ESX servers, many administrators select the default 1MB block size without knowing when or why to change it. The block size determines the minimum amount of disk space that any file will take up on a VMFS datastore (actually, VMFS3 also has sub-block allocation; see later). But the block size also determines the maximum size that any file can be: if you select a 1MB block size on your datastore, the maximum file size is limited to 256GB, so when you create a VM you cannot assign it a single virtual disk greater than 256GB.

     

    See also: http://kb.vmware.com/kb/1003565 - Block size limitations of a VMFS datastore

     

    There is also no way to change the block size after you set it without deleting the datastore and re-creating it, which will wipe out any data on the datastore.

    Because of this you should choose your block size carefully when creating VMFS datastores. VMFS datastores mainly contain large virtual disk files, so increasing the block size will not use much more disk space than the default 1MB size. You have the following choices when creating a datastore:

     

    • 1MB block size – 256GB maximum file size

    • 2MB block size – 512GB maximum file size

    • 4MB block size – 1024GB maximum file size

    • 8MB block size – 2048GB maximum file size
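    The table above follows a simple rule: on VMFS3 the maximum file size works out to the block size (in MB) multiplied by 256 GB. A quick sketch to reproduce the table:

    ```shell
    # Reproduce the block size table: max file size = block size (MB) * 256 GB.
    for bs in 1 2 4 8; do
      echo "${bs}MB block size -> $((bs * 256))GB maximum file size"
    done
    ```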

     


    What's new in VMFS5

    Actually, the big change is that VMFS5 can support disks and partitions larger than 2 TB.

    For disks greater than 2 TB, a GPT partition scheme is used to bypass the limit of the traditional MBR scheme.

     

    But from a block size perspective, blocks of NEW VMFS5 datastores are now fixed at 1MB, without any option to change the block size.

    Only for datastores upgraded from VMFS3 can the block size be different (depending on the original block size).

     

    For more info on new VMFS5 filesystem see "vSphere 5.0 Storage Features - VMFS-5":

    http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-1-vmfs-5.html

     


    Block size and performance

    Besides smaller files using slightly more disk space on your datastore, there are no other downsides to using larger block sizes. There is no noticeable I/O performance difference with a larger block size. When you create your datastore, make sure you choose your block size carefully. 1MB should be fine if you have a smaller datastore (less than 500GB) and never plan on using virtual disks greater than 256GB. If you have a medium (500GB – 1TB) datastore and there is a chance that you may need a VM with a larger disk, then go with a 2MB or 4MB block size. For larger datastores (1TB – 2TB), go with a 4MB or 8MB block size. In most cases you will not be creating virtual disks equal to the maximum size of your datastore (2TB), so you will usually not need an 8MB block size.
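    One way to apply this guidance is to pick the smallest block size whose file-size limit covers the largest virtual disk you expect to create. A minimal sketch (the function name is just illustrative):

    ```shell
    # Pick the smallest VMFS3 block size whose file size limit covers the
    # largest planned vmdk (in GB); limits follow the table earlier on.
    pick_block_size() {
      max_vmdk_gb=$1
      for bs in 1 2 4 8; do
        if [ $((bs * 256)) -ge "$max_vmdk_gb" ]; then
          echo "${bs}MB"
          return
        fi
      done
      echo "over the 2048GB VMFS3 file size limit"
    }
    pick_block_size 300   # -> 2MB (256GB is too small, 512GB fits)
    ```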

     

    See also: http://www.yellow-bricks.com/2009/03/24/an-8mb-vmfs-blocksize-doesnt-increase-performance/

     

    And what about vStorage APIs for Array Integration (VAAI)? Can block size change performance? Actually, there isn't an official answer. I've made some tests, but only with a single storage array, so the results may not be conclusive.

     


    Block size and used space

    VMFS3 (from VI 3.x and vSphere 4.x) uses sub-blocks for directories and small files smaller than 1 MB. When VMFS has used all the sub-blocks (4096 sub-blocks of 64 KB each), file blocks are used instead. For files of 1 MB or larger, file blocks are used. The size of the file block depends on the block size you selected when the datastore was created.

    See also: http://kb.vmware.com/kb/1003565 - Block size limitations of a VMFS datastore

     

    This is an interesting feature that avoids wasting too much space, even with a big block size. But the number of sub-blocks is limited to a little less than 4000 (for more info see also: http://communities.vmware.com/blogs/AndreTheGiant/2010/11/27/vmfs3-sub-block-allocation)
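    The allocation behaviour described above can be sketched as follows (sizes in KB; the 8MB block size is just an example, and this ignores the limited sub-block pool):

    ```shell
    # Sketch of VMFS3 allocation: files under 1MB consume 64KB sub-blocks
    # (while sub-blocks remain), larger files consume full file blocks.
    allocated_kb() {
      size_kb=$1; block_kb=$2; sub_kb=64
      if [ "$size_kb" -lt 1024 ]; then
        # round up to whole sub-blocks
        echo $(( (size_kb + sub_kb - 1) / sub_kb * sub_kb ))
      else
        # round up to whole file blocks
        echo $(( (size_kb + block_kb - 1) / block_kb * block_kb ))
      fi
    }
    allocated_kb 10 8192     # 10KB file -> 64 (one sub-block)
    allocated_kb 2048 8192   # 2MB file  -> 8192 (one 8MB file block)
    ```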

    To check the used space, you can simply use the du command (from the command line on the service console or management console), for example du *.vmsd (on a VM without snapshots).

     


    Different block size and problems

    By using different block sizes for different datastores, you can run into some problems:

     

    Official documents:

    http://kb.vmware.com/kb/1021976  - vStorage APIs for Array Integration FAQ

    http://kb.vmware.com/kb/1031038  - Hot-add fails with the error: File is larger than the maximum size supported by datastore

     


    Choosing the block size

    As written before, using different block sizes for different datastores may not be a good idea.

    So choosing the block size based on the size of the datastore (for example, 2 MB for a 512 GB datastore) is not a good criterion, even if all your datastores have the same size. What happens if you extend the datastore (IMHO I do not like datastore extents, but it is still an option)? You end up with a large datastore, but the vmdk size is still limited by the block size.

    So what could be the best criterion? As proposed by several people (for example, see Duncan Epping's note at http://www.yellow-bricks.com/2010/11/23/vstorage-apis-for-array-integration-aka-vaai/ ), the maximum block size (independent of the datastore size) could be a good choice.

     


    Change block size during installation of ESX

    So, if you need to use the same block size for all datastores, how is it possible to choose a size different from the default one (1 MB) for the first ESX datastore (the one with the service console vmdk)?

    There isn't an easy way of doing that right now. Given that a number of people have asked for it, we're looking at adding it in future versions.

     

    If you want to do this now, the only way to do it is by mucking around with the installer internals (and knowing how to use vi). It's not that difficult if you're familiar with using a command line. Try these steps for changing it with a graphical installation:

     

    • boot the ESX installation DVD


    • switch to the shell (Ctrl-Alt-F2)


    • ps | grep Xorg


    • kill the PID which comes up with something like "Xorg -br -logfile ...". On my system this comes up as PID 590, so "kill 590"


    • cd /usr/lib/vmware/weasel


    • vi fsset.py


    • scroll down to the part which says "class vmfs3FileSystem(FileSystemType):"


    • edit the "blockSizeMB" parameter to the block size that you want. It will currently be set to '1'. The only values that will probably work are 1, 2, 4, and 8.


    • save and exit the file


    • cd /


    • /bin/weasel


     

    After that, run through the installer as you normally would. To check that it worked, after the installer has completed you can go back to a different terminal (try Ctrl-Alt-F3, since weasel is now running on tty2) and look through /var/log/weasel.log for the vmkfstools creation command.
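    If you prefer not to edit the file interactively in vi, the same change can be scripted with sed. This is only a sketch, demonstrated on a local stand-in file: on the real installer you would point sed at /usr/lib/vmware/weasel/fsset.py, and the exact line layout may differ between builds.

    ```shell
    # Create a local stand-in for the relevant part of fsset.py, then
    # patch blockSizeMB from 1 to 8 non-interactively (GNU sed assumed).
    printf 'class vmfs3FileSystem(FileSystemType):\n    blockSizeMB = 1\n' > fsset-sample.py
    sed -i 's/blockSizeMB = 1/blockSizeMB = 8/' fsset-sample.py
    grep blockSizeMB fsset-sample.py   # -> blockSizeMB = 8
    ```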

     

    This problem can be simply solved by using ESXi instead of the legacy ESX. Note also that the next vSphere release (after 4.1) will drop the legacy ESX.

     

    Official document:

    http://kb.vmware.com/kb/1012683  - Increasing the block size of local VMFS storage in ESX 4.x during installation

    http://www.yellow-bricks.com/2009/11/11/changing-the-block-size-of-your-local-vmfs-during-the-install/

    http://www.gabesvirtualworld.com/?p=728  Change blocksize of local VMFS

     


    Sources

    http://www.yellow-bricks.com/2009/05/14/block-sizes-and-growing-your-vmfs/

    http://www.yellow-bricks.com/2009/11/10/block-sizes-think-before-you-decide/

    http://itknowledgeexchange.techtarget.com/virtualization-pro/choosing-a-block-size-when-creating-vmfs-datastores/

    http://deinoscloud.wordpress.com/2010/07/26/understanding-vmfs-block-size-and-file-size/

    ESX 4 Install localstorage block size?