You're right, there is no best practice for this, but the storage vendor's statement that "more than 400GB is crazy" is certainly untrue.
For my environment I did the following. I took an inventory of all existing physical servers that needed to be virtualized and collected their disk sizes, which came to X terabytes needed over Y VMs. That averages to X/Y GB per VM, in my case 25GB per VM.
After reading all the forum posts and a number of white papers, I decided I would never put more than 16 VMDKs together on one LUN, which gives a need of 16 x 25GB per LUN = 400GB. I normally assign 1GB of memory per VM, which also costs 1GB of swap space on the LUN, so I got to 416GB, let's say 425GB per LUN. Then I took a 10% margin for snapshots, and that got me to around 500GB per LUN. And this is what I'm using now.
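The sizing math above can be sketched as a small helper. This is just an illustration of the poster's own rule of thumb; the inputs (16 VMDKs per LUN, 25GB average VMDK, 1GB swap per VM, 10% snapshot margin) come from the post, not from any official VMware guideline.

```python
# Sketch of the LUN-sizing arithmetic from the post above.
# All defaults are the poster's numbers, not a VMware rule.
import math

def lun_size_gb(vms_per_lun=16, avg_vmdk_gb=25,
                swap_gb_per_vm=1, snapshot_margin=0.10):
    """Estimate a comfortable LUN size for a group of VMs."""
    disks = vms_per_lun * avg_vmdk_gb        # 16 * 25 = 400 GB of VMDKs
    swap = vms_per_lun * swap_gb_per_vm      # swap files sized to VM memory
    base = disks + swap                      # 416 GB
    return math.ceil(base * (1 + snapshot_margin))  # headroom for snapshots

print(lun_size_gb())  # -> 458
```

The exact result is 458GB; the poster rounded the intermediate figure up to 425GB and the final one to roughly 500GB, which is sensible since snapshot growth is hard to predict.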
Hope this helps.
As for RAID: 0+1 if you have the drives, and 5 as a secondary option if you don't have the drive count or need more space...
Another consideration is making the most efficient use of your SAN's capabilities. If it has two storage processors/controllers, I would suggest splitting the load across both; obviously, if one fails you would fail over to a single controller and take a performance hit.
Size is a bit trickier. One thing to keep in mind is that you can really hurt yourself by filling up a drive, and because of snapshots you can fill up a drive relatively easily... so don't give up the keys, i.e. don't allow users/server admins to snapshot!
I guess the above really applies to any app; I'm not sure how many VM-specific best practices are out there. I heard something about 1TB being a good rule of thumb, but never got any logic behind it... Oh, and as always, the more spindles the better.
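The drive-count trade-off between the two RAID levels mentioned above comes down to usable capacity: mirroring (RAID 0+1/10) costs half the raw space, while RAID 5 only gives up one drive's worth to parity. A quick sketch, assuming equal-size drives (the 146GB drive size is just an example; real arrays also reserve space for metadata and hot spares):

```python
# Usable capacity for the RAID levels discussed above.
# Assumes equal-size drives and ignores array metadata overhead.

def usable_gb(drives, drive_gb, level):
    if level == "raid10":      # mirrored pairs: half the raw capacity
        return (drives // 2) * drive_gb
    if level == "raid5":       # one drive's worth of parity
        return (drives - 1) * drive_gb
    raise ValueError(level)

# With eight 146 GB drives:
print(usable_gb(8, 146, "raid10"))  # -> 584
print(usable_gb(8, 146, "raid5"))   # -> 1022
```

This is why RAID 5 tends to win when you are short on drives or need more space, as the post says.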
500GB-600GB seems to be the recommended max size around here. Here are some good threads that talk about this:
Sizing LUN for enough free space after VM - http://www.vmware.com/community/thread.jspa?messageID=649338
Smaller LUNS or Larger LUNS - http://www.vmware.com/community/thread.jspa?threadID=90203
Larger LUNS = More Disks = More Performance - http://www.vmware.com/community/thread.jspa?threadID=84843&tstart=0
RAID Recommendations for local disk - http://www.vmware.com/community/thread.jspa?messageID=622456
RAID 5/6 Scaling - http://www.vmware.com/community/thread.jspa?messageID=718641
Which is better, more or fewer disks - http://www.vmware.com/community/thread.jspa?messageID=599611
SAN Configuration Guide - http://www.vmware.com/pdf/vi3_esx_san_cfg.pdf
SAN Conceptual and Design Basics - http://www.vmware.com/pdf/esx_san_cfg_technote.pdf
SAN System Design and Deployment Guide - http://www.vmware.com/pdf/vi3_san_design_deploy.pdf
FYI, if you find this post helpful, please award points using the Helpful/Correct buttons.
Visit my website: http://vmware-land.com
I usually don't go over 500GB for a multi-VM VMFS. If it's for one VM only without a lot of I/O, I create it at whatever size the customer wants; if it will generate a lot of I/O and needs more flexibility, I use RDMs.
According to a session at VMworld this year, the need for RDMs is getting less and less. They said the speed difference between VMDK and RDM is next to none, and the VMFS overhead is very small. Maybe it's worth doing some testing on the performance differences between them?
The performance gain is small, but enlarging an RDM/LUN on the fly is much easier than enlarging a VMFS + VMDK on the fly. Or better said: you can't enlarge a VMFS/VMDK on the fly. So that is the little extra flexibility you gain.
For deciding the LUN size for an ESX server, there are several factors:
1. The number of virtual machines that will be hosted on the ESX server.
2. The type of applications that will be running on the virtual machines, like Exchange, MS SQL, Oracle, etc. Since these are I/O-intensive applications, it is better to choose a LUN size of not more than 400GB for optimal performance of the virtual machines. If we have a larger space requirement, we can connect more LUNs of a smaller size.
3. The type of software used for backing up the ESX server and for SAN replication. If the LUNs are large, SAN replication will take longer.
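Point 3 is easy to put rough numbers on. As a back-of-the-envelope illustration (the ~100 MB/s effective link speed is an assumption, and real replication products usually copy only changed blocks rather than the whole LUN):

```python
# Rough full-copy replication time for a LUN over an assumed
# ~100 MB/s effective link. Real SAN replication typically only
# ships changed blocks, so treat this as a worst case.

def full_copy_hours(lun_gb, mb_per_sec=100):
    return lun_gb * 1024 / mb_per_sec / 3600

print(round(full_copy_hours(400), 1))   # -> 1.1 hours for a 400 GB LUN
print(round(full_copy_hours(2000), 1))  # -> 5.7 hours for a 2 TB LUN
```

So, all else being equal, smaller LUNs do give you shorter initial syncs and faster resynchronization after an outage.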
The size of the LUNs is partly dependent on the speed of your SAN. I do mostly 2TB LUNs. One of my VMs has an 800GB second drive, and several are at least 300GB (a few are 4GB)... RAID 10 works much better than RAID 5 for holding a larger number of virtual machines (RAID 10 has better random I/O performance, and more VMs means more random I/O). It really doesn't help performance much whether you have one 2TB VMFS or five 400GB ones. In earlier versions of ESX it made a bigger difference, but even then the number of virtual disks per VMFS mattered more than the size of each VMFS partition.
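The RAID 10 vs RAID 5 claim above can be supported with the standard write-penalty arithmetic: each host write costs two back-end operations on RAID 10 (two mirror writes) but four on RAID 5 (read data, read parity, write data, write parity). The per-disk IOPS figure and the 70/30 read/write mix below are assumptions for illustration only:

```python
# Back-of-the-envelope effective IOPS under standard RAID write
# penalties: 2 for RAID 10, 4 for RAID 5. Per-disk IOPS (150) and
# the 70% read mix are illustrative assumptions.

def effective_iops(spindles, disk_iops, write_penalty, read_frac):
    raw = spindles * disk_iops
    return raw / (read_frac + (1 - read_frac) * write_penalty)

args = dict(spindles=8, disk_iops=150, read_frac=0.7)
print(round(effective_iops(write_penalty=2, **args)))  # RAID 10 -> 923
print(round(effective_iops(write_penalty=4, **args)))  # RAID 5  -> 632
```

With the same eight spindles, the mirrored set sustains roughly 45% more front-end IOPS at this mix, and the gap widens as the workload gets more write-heavy — which is exactly the "more VMs is more random (and more write) I/O" situation.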
You can grow a VMDK easily (you have to shut down the guest, but the grow itself is quick). Whether growing a LUN is easy depends on your SAN, but a VMDK is always easy to grow... and even if your SAN allows it, it still generally requires the VM to be rebooted to take effect.
What about NFS datastores? VMware recommends not putting too many VMs on a VMFS due to I/O or metadata contention. Does the same guideline apply to an NFS datastore? BTW, I know VMware doesn't recommend NFS for production, but it seems to be working pretty well.
It all depends on the workload characteristics. At the least, no one would recommend running production machines on NFS; you might want to use it to save cost on some temporary machines.
We just implemented ESX and had to come up with a LUN design. I spent some time with a VMware implementer and we designed some small LUNs for the C: partitions and some others for data partitions. When the Dell SAN engineer came to create the LUNs while installing the SAN, he stated that the only way to spread disk I/O is between RAID groups within the SAN. In saying so, he implicitly said that no matter what size the LUNs are, you have to spread the load over RAID groups, not LUNs.
From an ESX point of view, I am still wondering whether it is relevant to create multiple LUNs.
There are times when the host has to lock the disk, and in those situations the number of VMs per LUN makes more of a difference than the disks behind those LUNs. This is more of an issue with ESX 2.x than 3.x.
What is the downside (besides management) of having many small volumes, each containing a single VM? Is this not a good idea from a performance standpoint?