I know that performance really varies based on the disk I/O activity of the VMs you have, but I'm curious: what's the common number of VMs per datastore that most of us run?
In my case we usually have 5-7 VMs per datastore.
About 30 VMs were running fine on my HP 2012fc with 16 SAS disks in RAID 6. Now we've moved to an EVA 6400 and we're still fine.
---
MCSA, MCTS, VCP, VMware vExpert '2009
In our case, where we have some very low I/O XP virtual desktops, we go up to 15 per LUN. I remember being told in the VI 3.5 FastTrack not to exceed 18, and someone else on here recommended no more than 12.
Some of these questions are becoming legendary.
There is no hard and fast rule regarding the number of VMs per LUN. A good average is 16 per LUN, but it is really dependent on IOPS, latency and capacity. For example, I have had upwards of 60 XP VMs running as linked clones on a single LUN with VDI.
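If you want to size from IOPS rather than a rule of thumb, a back-of-the-envelope calculation like this sketch can be a starting point. All the figures in it (IOPS per spindle, per-VM load, write penalty) are illustrative assumptions, not VMware or array-vendor numbers:

```python
# Rough VM-per-LUN sizing from an IOPS budget.
# All figures here are illustrative assumptions.

def vms_per_lun(disks, iops_per_disk, read_pct, write_penalty, iops_per_vm):
    """Estimate how many average VMs a LUN's spindles can support."""
    raw = disks * iops_per_disk
    # Writes cost extra back-end I/Os: ~4 for RAID 5, ~2 for RAID 10.
    effective = raw / (read_pct + (1 - read_pct) * write_penalty)
    return int(effective // iops_per_vm)

# e.g. 9 x 15K FC spindles (~180 IOPS each), 70% reads, RAID 5,
# VMs averaging ~50 IOPS apiece:
print(vms_per_lun(9, 180, 0.7, 4, 50))  # -> 17
```

Which lands right in the 15-20 range people keep quoting; heavier write workloads or slower disks pull it down fast.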
If you found this or any other answer useful, please consider using the Helpful or Correct buttons to award points.
Tom Howarth VCP / vExpert
VMware Communities User Moderator
Blog: www.planetvm.net
Contributing author on "VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment" (http://www.amazon.co.uk/VMware-VSphere-Virtual-Infrastructure-Security/dp/0137158009/ref=sr_1_1?ie=UTF8&s=books&qid=1256146240&sr=1-1).
We are an organization with 8000+ employees and we typically carve up 9-disk RAID 5 groups of 300GB 15K FC disks per datastore, using four EMC Clariion arrays. We average about 18-20 server-class virtual machines per datastore. These servers range from file/print to web servers to infrastructure servers. We don't see any issues with performance at the virtual machines or users' desktop applications. We also don't have issues with SCSI reservations. Now I know it's been said that datastores should be sized between 500GB-1TB for performance purposes (in general), but we get great mileage from our 2TB RAID 5 configs.
VCP 3, 4
Yes, there is no hard and fast rule defined! However, a maximum of 12 per LUN is a good number to go with on an active/active SAN.
Thanks,
Ramesh. Geddam,
VCP 3&4, MCTS(Hyper-V), SNIA SCP.
Please award points, if helpful
Geddam, why 12?
Why not 6 or 18?
---
MCSA, MCTS, VCP, VMware vExpert '2009
Using NetApp NFS I try to use a hard limit of 32, but I can do more if I am not using SnapManager for Virtual Infrastructure. The limiting factor there is the number of concurrent snapshot actions during a backup.
That's the test case I went in for.
Handling of the lock on a LUN (usually called DLM, the distributed locking mechanism) is much improved from 3.5 onwards. However, there is always a suggested practice of sticking with ratios: number of hosts to number of LUNs, and number of VMs per LUN to number of locks per LUN.
There is a nice KB article which talks about this: 1005011.
The test case I went through was:
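As a rough illustration of why that ratio matters (this is my own toy model, not VMware's math): if each VM independently wants the LUN metadata lock some small fraction of the time, the chance that two or more want it at the same instant climbs quickly as you pack more VMs onto the LUN.

```python
# Toy contention model: each VM wants the LUN lock p_lock of the time.
# The 1% figure is an assumption purely for illustration.

def conflict_probability(vms, p_lock=0.01):
    """P(two or more VMs contend for the lock in the same instant)."""
    p_none = (1 - p_lock) ** vms
    p_one = vms * p_lock * (1 - p_lock) ** (vms - 1)
    return 1 - p_none - p_one

for n in (6, 12, 18, 30):
    print(n, round(conflict_probability(n), 4))
```

The absolute numbers mean nothing; the shape of the curve is the point, and it is why "fewer VMs per LUN" keeps coming up as the fix for reservation conflicts.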
1. DMX4 with ESX 3.5 on HP c3000 BC, where we experienced frequent freezes of LUNs due to many reservation conflicts; we were not maintaining a static LUN architecture. Mostly we used to stick with 15-18 VMs per LUN (500GB).
2. After we had a word with VMware we came to the conclusion to re-architect the number of VMs per LUN (they said not to keep more than 12 on an active/active SAN). We split the 500GB LUNs into 250GB LUNs and maintained a maximum of 10-12 VMs per LUN.
3. I currently work at HP as a Solution Consultant (backline team of the VMware Competency). This is what I have been recommending to our customers for better DLM bandwidth. This magic number 12 works well.
4. Above all, it gives a lot of flexibility for snapshots and documentation on VMFS sizing.
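To put rough numbers on that 250GB / 10-12 VM split, a quick capacity check like the sketch below helps. The per-VM overheads (swap, snapshot headroom, free-space reserve) are my assumptions for illustration, not figures from the case above:

```python
# Capacity sanity check for a LUN layout.
# Per-VM overheads are illustrative assumptions.

def lun_fits(lun_gb, vms, vmdk_gb, swap_gb=2, snap_pct=0.15, free_pct=0.10):
    """True if the LUN holds the VMs plus snapshot and free-space headroom."""
    per_vm = vmdk_gb + swap_gb + vmdk_gb * snap_pct
    needed = vms * per_vm + lun_gb * free_pct
    return needed <= lun_gb

print(lun_fits(250, 12, 12))  # -> True: 12 VMs with ~12 GB disks fit
print(lun_fits(250, 12, 20))  # -> False: bigger disks blow the budget
```

So the 12-per-LUN number only holds together if the per-VM footprint stays modest once snapshot headroom is counted.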
Note: the host doesn't see the size of the LUN at the initial interrupt; it does a lot of journaling at the initial interrupt. After it is done with journaling and committing data, it releases the lock from the .sf files usually located in any VMFS store.
Hope the above information is helpful!
Thanks,
Ramesh. Geddam,
VCP 3&4, MCTS(Hyper-V), SNIA SCP.
Please award points, if helpful
2TB datastore (not VMware recommended)
41-50 VMs per datastore (not VMware recommended)
Using Netapp nfs I try to use a hard limit of 32, but I can do more if I am not using SnapManager for Virtual Infrastructure.
The limit applies to VMFS volumes, not NFS. On NFS the datastore is simply a file target; a VM is just a file on disk, so there is no such limit for NFS.
Handling a lock on LUN (Usually called DLM - Distributed locking mechanism) is much tuned from 3.5 onwards.
You can have 1 host and 1 VM on 1 LUN and STILL get lock conflicts... so much for that theory.
What is your bottom line, Parker? It looks like you are not happy with VMFS.
Thanks,
Ramesh. Geddam,
VCP 3&4, MCTS(Hyper-V), SNIA SCP.
Please award points, if helpful