I have been told that I should introduce LUNs to my ESX 3 servers in sizes NOT greater than 300 GB. I was told this is recommended due to disk queue length and the adverse effect it can have on performance when placing too many VMs on the same partition.
Another person told me not to place more than 10 VMs on a VMFS partition for the same reason. Now a 3rd person has told me that this was true of ESX 2.x and earlier due to the flat file system used by VMFS and large disk queue lengths, but he tells me that this has changed with ESX 3. This is his claim:
"VI3 has a more sophisticated file system, allowing for more and larger files, as well as allowing for subdirectories. It is supposed to be able to have more VMs on the VMFS than earlier versions."
My question is, should a VMFS partition be restricted to a certain size, and generally how large can that size be without taking a performance hit? Most of our VMs will be servers that perform simple tasks. Some will be running SQL databases, but even those servers will NOT be high-resource, high-activity servers.
Also, how many VMs should we restrict to a VMFS partition? I was planning on having up to 25 VMs on a single 400 GB VMFS partition. Would 25 VMs be too many for one VMFS partition? Is 400 GB too large for a VMFS partition?
Generally you should still stick to around 10 - 15 VMs per LUN, but it will really depend on the nature of the VMs and you would want to keep the number of VMDK files to below 30.
If you were thinking of 25 VMs on the 400 GB LUN, you might want to split them across two LUNs, and if you're using a SAN that would give you a chance to split the I/O across two storage processors, if that's possible for you.
A few threads to look at. As an aside, you also have to watch the block size that you use when you format the partition. It doesn't appear to be the case with your setup, but if you needed a VMDK file greater than 256 GB then you would have to change the default block size when you create the VMFS partition.
http://www.vmware.com/community/thread.jspa?threadID=36725&start=0&tstart=0
http://www.vmware.com/community/thread.jspa?messageID=333672
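To illustrate the block-size point above, here is a small sketch using the commonly cited VMFS3 block-size-to-maximum-file-size pairs (1 MB block → 256 GB max file, doubling from there). Verify the exact values against VMware's Configuration Maximums document for your ESX build before relying on them.

```python
# Commonly cited VMFS3 block size (MB) -> maximum file size (GB) pairs.
# These are assumptions for illustration; check VMware's docs for your build.
BLOCK_SIZE_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def min_block_size_mb(vmdk_size_gb):
    """Return the smallest VMFS3 block size (MB) that can hold a VMDK this large."""
    for block_mb, max_gb in sorted(BLOCK_SIZE_MAX_FILE_GB.items()):
        if vmdk_size_gb <= max_gb:
            return block_mb
    raise ValueError("VMDK larger than any VMFS3 block size supports")

# A 300 GB VMDK won't fit under the default 1 MB block size (256 GB cap),
# so the datastore would need at least a 2 MB block size at format time.
print(min_block_size_mb(300))
```

The point is simply that block size is fixed when the VMFS is created, so it pays to decide the largest VMDK you might ever need before formatting.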
I think 25 VMs is a bit too many (better to stay under 20 with FC; with iSCSI even fewer), but the 400 GB size is OK - the question here is how much total space you have available for the ESX servers.
E.g. if you have 1000 GB of space, then I would prefer 200 GB LUNs - remember to leave some free space on each LUN for things like snapshots and swap files. Make one LUN for ISO images of your OS install CDs.
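As a rough sketch of the carving suggested above: split the total SAN space into fixed-size LUNs and reserve headroom on each for snapshots and swap files. The 20% reserve below is an illustrative assumption, not a VMware recommendation - size it to your own snapshot and swap behavior.

```python
# Hypothetical carve-up of 1000 GB of SAN space into 200 GB LUNs,
# keeping an assumed 20% headroom per LUN for snapshots and swap files.
total_gb = 1000
lun_gb = 200
headroom_pct = 0.20  # illustrative reserve; tune for your environment

num_luns = total_gb // lun_gb
usable_per_lun = lun_gb * (1 - headroom_pct)

print(num_luns, usable_per_lun)  # 5 LUNs, 160.0 GB usable on each
```

One of those LUNs could then be dedicated to OS install ISOs, as suggested above, leaving the rest for VM placement.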
Here's some good reference docs....
Configuration Maximums for VMware Infrastructure 3 - http://www.vmware.com/pdf/vi3_301_201_config_max.pdf
SAN Configuration Guide - http://www.vmware.com/pdf/vi3_esx_san_cfg.pdf
SAN Conceptual and Design Basics - http://www.vmware.com/pdf/esx_san_cfg_technote.pdf
SAN System Design and Deployment Guide - http://www.vmware.com/pdf/vi3_san_design_deploy.pdf
Extending a VMFS3 data store - http://www.vmware.com/community/thread.jspa?threadID=65156&tstart=0
To use extents or not? - http://www.vmware.com/community/thread.jspa?threadID=81494&tstart=100
LUNS - http://www.vmware.com/community/thread.jspa?messageID=333672
LUNS Size - http://www.vmware.com/community/thread.jspa?threadID=36725&start=0&tstart=0
-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-=-
Thanks, Eric
Visit my website: http://vmware-land.com
-=-=-=-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-=-
There is no hard and fast rule that you can apply to every environment. Number of LUNs and LUN density can almost be a religious debate, especially if you have a group dedicated to SAN technologies. The SAN people will have their standard best practices and you'll have your own ideas about what LUN sizes you'll need. IMO 300 GB is a bit too small as a rule-of-thumb LUN size; you'll end up managing too many LUNs. Also keep in mind there's a finite limit to the number of LUNs one can present to an ESX host, and multipathing adds a penalty to that equation.
I like to see 500GB or more LUN sizes for standard VM placement to achieve 10-16 VMs per LUN.
Hello,
Sizing a LUN depends on quite a bit. With ESX v2, it ended up being the count of files on the LUN that mattered, as more files meant slower performance overall.
In essence it really depends on file size. Take for example a 400 GB LUN. If your average VMDK size is 20 GB and memory size is 2 GB, you will take AT LEAST 23 or so GB per VM once you count all the various little files. Now add in backups, and your delta size could be as big as, if not bigger than, your original VMDK. Say it takes 40 GB total when running backups. People could argue this is a high number, but it is safer to err on the side of caution. So 40 GB per VM means you can only hold 10 VMs on your 400 GB partition.
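The arithmetic above can be written out as a quick back-of-the-envelope sketch. All of the figures (20 GB VMDK, 2 GB swap, ~40 GB worst case with backup deltas) come straight from the example; substitute your own measurements.

```python
# Per-VM space budget from the example above (all figures illustrative).
vmdk_gb = 20          # average virtual disk size
vswap_gb = 2          # VM memory size -> swap file on the VMFS
overhead_gb = 1       # logs, .vmx, nvram and other small files
backup_delta_gb = 17  # worst-case snapshot/delta growth during backups

per_vm_gb = vmdk_gb + vswap_gb + overhead_gb + backup_delta_gb  # ~40 GB
lun_gb = 400

print(lun_gb // per_vm_gb)  # -> 10 VMs fit on the 400 GB partition
```

The useful part of the exercise is that the conservative 40 GB figure, not the 23 GB "happy path" figure, is what determines the safe VM count.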
In addition, that number could go up or down based on the type of SAN, the type of write/read IO each VM will have, etc. I tend to run some simple tests of the VMs to decide which size LUN I will use as it is very different for each organization.
This has always been a huge debate/religious argument. I personally like the system to tell me what is best based on my storage and storage usage, plus the pure size of existing per VM files and the possible sizes of some others.
Best regards,
Edward
>Also, how many VMs should we restrict to a VMFS partition?
Not so much VMs, but VMDKs. I use a rule of 30 VMDKs max per VMFS, so typically 15 VMs. If you plan on using snapshots or thin disks, reduce this number (due to LUN locking).
Dave
You need to balance IOPS with storage capacity. If a LUN has, say, 300 IOPS of capacity and you have 10 VMs that each push 30 IOPS, then you should make sure that your LUNs are appropriately sized to hold those VMs.
My strategy is to have several LUNs with a decent size and good IOPS capacity, say 200-500 GB and 200-1000 IOPS, and spread VMs amongst them to even out the load as you go. VirtualCenter does a similar job of this for CPU with its DRS feature, but for storage it doesn't know the capacity of the underlying storage, and VMotion can't migrate to different storage, so you need to do this manually.
Two gotchas to watch out for: hotspotting and wasted capacity. If you make your LUNs large with low IOPS capacity, you will end up with a lot of unusable space - you will run out of I/O capacity well before you run out of storage capacity, leaving a lot of space unusable. Hotspotting is when you have all of your I/O going to a single LUN, overwhelming that LUN. Consider this: you have 3 LUNs with, say, 30 VMs on them. As the LUNs get full and the VMs start to run out of space, you want to extend the disks in your VMs, so you allocate a fourth LUN and extend the disks of the 20 VMs whose disks have filled up onto the new LUN. Since all the free space for those VMs is now on one LUN, you now have double the number of writes on that LUN.
This is a bit of an extreme example, but the point is you need to balance storage capacity with performance.
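The manual balancing described above can be sketched as a simple greedy placement: put each VM on the LUN with the most IOPS headroom, but only if it has both space and IOPS to spare. The LUN and VM figures below are made up for illustration; you would measure your own array's real IOPS capacity.

```python
# Illustrative IOPS-vs-capacity balancing; all numbers are assumptions.
luns = [
    {"name": "lun1", "capacity_gb": 400, "iops_limit": 500},
    {"name": "lun2", "capacity_gb": 400, "iops_limit": 300},
]
vms = [{"size_gb": 30, "iops": 30} for _ in range(10)]

placement = {lun["name"]: [] for lun in luns}

def iops_headroom(lun):
    """Remaining IOPS on a LUN given the VMs already placed on it."""
    return lun["iops_limit"] - sum(v["iops"] for v in placement[lun["name"]])

def fits(lun, vm):
    """A VM fits only if the LUN has both space and IOPS headroom left."""
    placed = placement[lun["name"]]
    used_gb = sum(v["size_gb"] for v in placed)
    return (used_gb + vm["size_gb"] <= lun["capacity_gb"]
            and iops_headroom(lun) >= vm["iops"])

# Greedy spread: each VM goes to the LUN with the most IOPS headroom.
for vm in vms:
    best = max(luns, key=iops_headroom)
    if fits(best, vm):
        placement[best["name"]].append(vm)

print({name: len(vs) for name, vs in placement.items()})
```

Note how the lower-IOPS LUN ends up with far fewer VMs even though both LUNs have identical capacity - exactly the "run out of I/O before you run out of space" effect described above.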
