I generally allow for up to 4 low/average-use VMs per 3GHz core and reserve one core for the Service Console. You should also allow about 1GB of RAM per VM.
Around 8-10 VMs per physical NIC.
But this is all relative to what you are running on each VM. You also have to keep the configuration maximums in mind.
Regards
Leafy911
(Don't forget you receive points when you award points.)
I have run lots of VMs per host (up to 20), but the recommendations are usually around 10 per host, depending on resource usage. It is important to run an N+1 configuration: if you have 5 hosts running at 80% and one host fails, the remaining 4 hosts have to absorb that load, leaving each of them at almost 100% utilization. Therefore the initial setup should keep each node below 80% in this configuration. In general, memory is the biggest limiting factor, so be aware of that.
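That failover arithmetic can be sketched in a few lines of Python (the function name and the even spread of load across survivors are my own assumptions for illustration):

```python
# Post-failover utilization for an N+1 style cluster, assuming the
# failed host's load is spread evenly across the surviving hosts.

def failover_utilization(hosts: int, utilization: float, failures: int = 1) -> float:
    """Per-host utilization after `failures` hosts drop out."""
    survivors = hosts - failures
    if survivors <= 0:
        raise ValueError("not enough surviving hosts")
    return hosts * utilization / survivors

# 5 hosts at 80% each; one fails -> the 4 survivors run at 100%:
print(failover_utilization(5, 0.80))  # 1.0
```

Running it backwards shows why you want to start below 80%: to keep the survivors at or under 80% after one failure in a 5-host cluster, each host has to start at 64% or less.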
As the others said, it really depends on the load your VMs produce and how many CPU cores your ESX host has. In one of our clusters we use DL585s with 4 dual-core Opterons. The VMs normally produce a small load, so up to 40-50 VMs per host are possible. Generally we have a rule to put a maximum of 4-5 virtual CPUs per physical core.
The absolute maximum of virtual CPUs per host is 192 with ESX 3.5 Update 2 (http://www.vmware.com/pdf/vi3_35/esx_3/r35u2/vi3_35_25_u2_config_max.pdf).
CU
I would say that it all depends on your workload.
I have 6 8-way (single-core, 32GB RAM) boxes running anywhere from 55 to 60 VMs each.
Also, keep in mind that 3.5 Update 1 (or later) allows up to 192 vCPUs. So depending on how many multi-vCPU boxes you have, that will lower the total VM count.
I just racked 2 new 4-way (quad-core, 128GB RAM) boxes, and I'm wondering how many I'll get there.
Gotta love new toys.
Jase McCarty
Co-Author of VMware ESX Essentials in the Virtual Data Center
(ISBN:1420070274) from Auerbach
Have a look at this thread, Troy Clavell has a very big site with around 133 VMs per host.
http://communities.vmware.com/message/1049753
Regards
Leafy911
Cameron's point is the most critical one here: you're going to get varying degrees of performance with different hardware and guest functions. When we first deployed VMware, we were consolidating grossly underutilized servers (license servers, special applications, etc.).
We assigned 1GB per guest, and each host had 32GB of RAM. With 5 hosts in the cluster, that meant 150 guests was the maximum (allowing for SC overhead). However, N-1 = 4 hosts x 30 guests = 120 was the maximum we decided on for that cluster, because we had to assume that at some point we'd have a failure (or we'd want to migrate VMs off a host and upgrade it during the day).
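A quick sketch of that memory-based capacity math (the 2GB-per-host Service Console reservation is my own assumption; the post only says "assuming overhead for SC"):

```python
# Memory-based guest capacity for a 5-host cluster, with and without
# reserving one host's worth of capacity for failover (N-1).

HOST_RAM_GB = 32
SC_OVERHEAD_GB = 2       # assumed Service Console reservation per host
GUEST_RAM_GB = 1
HOSTS = 5

guests_per_host = (HOST_RAM_GB - SC_OVERHEAD_GB) // GUEST_RAM_GB
raw_max = guests_per_host * HOSTS              # all hosts healthy
n_minus_1_max = guests_per_host * (HOSTS - 1)  # tolerate one host down

print(guests_per_host, raw_max, n_minus_1_max)  # 30 150 120
```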
Max.
How about VMs with 256MB/512MB/768MB of RAM on a 32GB host?
For a long time, we had:
20 VMs with 768MB of RAM = 15GB of RAM
20 VMs with 512MB of RAM = 10GB of RAM
20 VMs with 256MB of RAM = 5GB of RAM
So now we are at a total of 30GB of RAM, with a little headroom for the SC (800MB).
Don't get me wrong, it is a little tight, and DRS comes in very handy for moving around the VMs that are really being taxed.
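The memory budget above can be tallied like this (a sketch; the tier counts are taken straight from the post):

```python
# Tiered memory budget on a 32GB host: VM RAM size (MB) -> VM count.

HOST_RAM_GB = 32
SC_RAM_MB = 800          # Service Console, per the post

tiers = {768: 20, 512: 20, 256: 20}

vm_total_gb = sum(mb * count for mb, count in tiers.items()) / 1024
headroom_gb = HOST_RAM_GB - vm_total_gb - SC_RAM_MB / 1024

print(vm_total_gb)            # 30.0
print(round(headroom_gb, 2))  # 1.22
```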
Jase McCarty
Co-Author of VMware ESX Essentials in the Virtual Data Center
(ISBN:1420070274) from Auerbach
> Have a look at this thread, Troy Clavell has a very big site with around 133 VMs per host.
It doesn't matter how many someone else has running; they could be very small VMs. Quad-core processors allow more consolidation than dual-core (in our experience). The key is the processors, not the VM count. 4 vCPUs per core is a recommended guideline, not an absolute. As Jase said, it depends on workload, but memory is irrelevant, because the more Windows guests (similar kernels) you run, the more shared memory you have in the end. CPU is the first and foremost factor on any VM server/host. Memory can be adjusted and tweaked on VMs, and we pretty much ignore it. Also, consider the swap that takes place for VMs: they get their own memory anyway, but even if the host swaps, the typical config boots from the SAN while swap occurs on the local ESX host (very fast disk speeds, so you won't notice thrashing).
It all boils down to micromanagement and documentation. You should know your environment. Every VM is different, every host is different, every cluster is different, every configuration is different, and we can argue this point until we are blue in the face; one config may not work well in ALL environments. That's why you TEST first to see how the VMs react, then add more: lather, rinse, repeat. That's really the only way to find out how many VMs you can put on a host. If, when all is said and done, a host fails while you are close to your limit, you are in trouble; that's why you want to leave some padding, no more than 70% consolidation, because in the event something happens you won't be able to shift your VMs elsewhere. So it's more than just numbers; it's about planning, good implementation, and testing. The hardware is only as good as you anticipate it SHOULD be.
Hello,
The absolute max is 128 vCPUs per host. That could be as few as 32 four-vCPU VMs. But remember, you want to max out your machines to at most 80% utilization of CPU, disk, and network. Memory is not really an issue; just try not to overcommit memory. 80% utilization for your workload could be only 2 VMs. Why 80%? It is the empirical number where there is enough headroom for migrated VMs and enough leftover resources for the inevitable spikes in utilization.
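The vCPU ceiling works out as a simple integer division (a sketch; `max_vms` is my own helper name, using the 128-vCPU limit cited above):

```python
VCPU_LIMIT = 128  # per-host limit cited above (192 with 3.5 U1/U2)

def max_vms(vcpus_per_vm: int, limit: int = VCPU_LIMIT) -> int:
    """VM ceiling imposed by the per-host vCPU limit alone."""
    return limit // vcpus_per_vm

print(max_vms(4))  # 32 four-vCPU VMs
print(max_vms(1))  # 128 single-vCPU VMs
```

Note that the 80% utilization rule is a separate constraint; the real ceiling is the lower of the two.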
I know some companies that only want to achieve a 50% utilization. Your goals need to be well understood and documented.
Your documentation will stand you in good stead as it will give you a guide for how to fill up the ESX hosts. For example, I only have at most 20 VMs running per host, but that is well below the 80% limit. Most of my customers will not go above 60%. That is their choice.
The idea behind the utilization percentages is that they tell you when you need a new host, not just the max load at which you feel comfortable. Comfort has quite a bit to do with it as well. But as RParker stated, you really need to have all this documented, with buy-in from management, so that you can get the 'new' host when necessary.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354
As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
I agree with everything you said Texiwill... except:
192 vCPUs per host - VMware now supports increasing the maximum number of vCPUs per host to 192, provided that the number of virtual machines per host is no more than 170 and that no more than three virtual floppy or virtual CD-ROM devices are configured on the host at any given time. This support also extends to ESX Server 3.5 Update 1.
Here's the link:
Jase McCarty
Co-Author of VMware ESX Essentials in the Virtual Data Center
(ISBN:1420070274) from Auerbach
I haven't seen my buddies out there mention that you may also want to take failover into consideration: if a host goes down, you want extra space on the other hosts in that cluster. If you fill it up, you won't get much benefit from that cluster.
Think about that as well.
Matthew
Kaizen!
Actually, Texiwill mentioned that you shouldn't go above 80% utilization...
With that being said, the 128/192 vCPU limit (I would think) should be taken into account for accommodating VMs (total vCPU count) in a failover situation even more than in normal operation.
Jase McCarty
Co-Author of VMware ESX Essentials in the Virtual Data Center
(ISBN:1420070274) from Auerbach
Hello,
I have yet to see anyone running that many vCPUs, but they would only run on the very, very large boxes (DL785 G5, DL58x G5/G6). Thanks for the updated information, Jase.
Part of your documentation should cover how you would handle any failure case. Most people buy N+1 servers so that they have the capacity. I have known people to buy N+2, with the spare +1 locked in a closet just in case.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354
As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
You got it Jase, Texi did say that, I just thought it was worth spelling out. You know me, I always say, "When you feel like you are over communicating, you are at just the right level."
Kaizen!
I noticed that about you, Matthew. Respect; you don't say much, but what you say is useful.
Tom Howarth
VMware Communities User Moderator