VMware Cloud Community
Gabrie1
Commander

How many VMs would you DARE run on one host???

Hi

We have to order new hosts and are discussing what to buy. We are currently running DL585 G2 hosts with 8 cores (4 CPUs, dual core) and have about 30 VMs per host. Now, we could buy the DL585 G5 with 4 quad-core CPUs and put 80 GB of RAM in it. But that would give me about 60 VMs on one host, which would create a huge impact when this one host fails. Even if HA picked up the crashed VMs, I would still have downtime for 60 VMs instead of 30. But where should we draw the line?

Any comments from the field???

Gabrie

http://www.GabesVirtualWorld.com

Accepted Solutions
adolopo
Enthusiast

For this, I think a better question would be: "How many VMs would you DARE run per core", and this is assuming you have no constraints with purchasing/provisioning memory for that given host(s).

But when it comes down to it, it really is a question of how your infrastructure implementation (from the ground up) will affect you later on down the road. Go with blades and a high number of hosts? Or go with a low number of hosts and beefy machines (DL58x)? From my experience, I'd have to say I prefer the smaller approach, as migrating 20 sessions off a host is (obviously) less of an event than migrating, say, 60 or 70. Some people may say the fewer hosts you have to manage the better, but once you reach a certain number of VMs within a cluster, the differences (host-wise) are negligible.


24 Replies
oreeh
Immortal

I assume you are aware of this paper from Massimo dealing with scale up versus scale out?

If not, give it a read. Despite its age it is, IMHO, still valid.

But where should we draw the line?

Probably not what you want to hear... it depends.

Lots of small servers are harder to manage (cluster restrictions, required time frame for patching, ...)

Few big servers are, as you already mentioned, a nightmare when one of them fails and tend to be more expensive.

I personally wouldn't load up a host with more than ~30 VMs unless I had datacenter space constraints or something similar which forces me to do it.

VMKR9
Expert

Nail on the head, oreeh... scale up or out. I would scale out, depending on how many hosts you already have.

How much would 80 GB of RAM cost... a lot?

The price sweet spot is a quad-core server with 16 GB of RAM; it will probably run about 20 VMs with no problem, though I have seen 60 VMs running on this setup...

Gabrie1
Commander

Yeah, I knew the "it depends" answer was coming up :)

But what would your gut feeling be? When talking about 30 VMs on one host, I'm proud we can achieve these results; when talking about 60, I get a bit scared about uptime.

http://www.GabesVirtualWorld.com
oreeh
Immortal

My gut feeling ... no more than ~30 VMs per host.

This way failover capacity isn't too expensive, management isn't a nightmare, and you don't risk your job when a host goes down ;)

VMKR9
Expert

My heart says go with as many VMs on the host as you can get; my head says 30 is probably close to the limit. I would look for a sweet spot between price and number of VMs per host. So maybe 40 VMs per host would be more cost effective: not quite as bad as 60 VMs failing at once, and better value than 30.

I would love to see 50 or 60 VMs running on each host in a cluster; if you can find the capacity to satisfy failover, I say go for it!

Just hope 2 hosts don't fail at once!

Texiwill
Leadership

Hello,

This really depends. I know companies that are doing no more than a 10:1 or 20:1 consolidation ratio, but there are other companies with 50+ VMs running on one box (at the time it was a DL760 with 8 CPUs and 64 GB of memory). I do know that the max number of vCPUs you can put on a system is still 8 * pCores, and the largest box I have seen is the DL580 G4 with 4 quad-core CPUs (16 cores) and 512 GB of memory... so maximally 128 vCPUs.

The "it depends" answer is really the issue... I would load up no more than 30 VMs in general, but if it is a dev box, or a box for a lot of relatively unused, low-utilization systems, I may place 50-60 on one box. Granted, I would make sure that there is enough capacity elsewhere to pick up the VMs in an HA situation as well.

I would say if someone gave me one of those monster machines I would definitely use it all and add VMs until I hit 80% CPU utilization on the box, ran out of memory, or ran out of disk space. Then I would ask for another monster machine as a backup to that one. :) Anyone care to send me a pair of these beasts?
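The vCPU ceiling mentioned above (8 vCPUs per physical core) is simple arithmetic; here is a quick sketch, purely illustrative and assuming the figures quoted in this thread:

```python
def max_vcpus(physical_cores: int, vcpus_per_core: int = 8) -> int:
    """Rule of thumb from this thread: at most 8 vCPUs per physical core."""
    return physical_cores * vcpus_per_core

# DL580 G4 class box: 4 sockets x 4 cores = 16 physical cores
print(max_vcpus(16))  # 128, matching the "maximally 128 vCPUs" figure above
```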


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354, As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
williambishop
Expert

We run our VDI clients at around a 60 to 1 ratio without any issues, and since we put 4 hosts (blades) per desktop cluster, should one host fail, the rest can carry the load.

--"Non Temetis Messor."
VMwareSME
Enthusiast

It seems the bigger the host, the greater the financial savings and the more VMs it can host. But the risk increases too, so finding that sweet spot for risk vs. financial reward depends largely on how much money you lose if the host goes down.

If it was development in our environment, I'd go with as many VMs as possible (120 is the max). We have hosts that have 90 VMs in dev.

In production, we always go with the N+1 method in case of a host outage.

oreeh
Immortal

the largest box I have seen is the DL580G4 with 4 quad cores

Then you should take a look at the HP DL785 G5 - eight AMD quad-core CPUs. :)

Not sure though if this beast is on the HCL.

MalcO
Contributor

Very similar to our setup. We were running 40 VMs on a DL580 G4 with 4 x dual-core CPUs, redundant power supplies, RAID 5 memory, and 72 GB of RAID 1 local disk (all other storage provided via a SAN). The only things we were exposed to were a motherboard or disk controller failure. We have now added an identical server, created an HA cluster, and increased the number of VMs to 55. Next year we will be replacing these with BL680c blades to reduce the power and cooling requirements.

Ken_Cline
Champion

I make this decision based on a couple of things:

- How important are the VMs in question?

- If they're truly "mission critical", then I keep the number small - on the order of 10:1

- If they're "important", then let's look at 20:1

- If they're "who cares if they're up", then load 'em up!

- How large is the environment? I like to deploy a minimum of two hosts (three makes me happier)

- 20 systems @ 2 hosts = 10:1, @ 3 hosts = 7:1

- 100 systems @ 2 hosts = I wouldn't do it, @ 3 hosts = 34:1

- 1,000 systems - now you're talking! @ 10 hosts = 100:1, @ 20 hosts = 50:1, @ 30 hosts = 34:1

- 10,000 systems - you can bet I'm going to have a few hosts with 50 to 60 (or more) VMs and some hosts with 10 (or fewer) VMs!

So, there's no single "right" answer (other than "it depends") :)
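The per-host ratios above are just total systems divided by host count, rounded up to whole VMs; a quick sketch, purely illustrative:

```python
import math

def consolidation_ratio(total_vms: int, hosts: int) -> int:
    """VMs per host when a VM population is spread evenly across hosts,
    rounded up to whole VMs (matches the n:1 figures in the post above)."""
    return math.ceil(total_vms / hosts)

print(consolidation_ratio(100, 3))   # 34 -> the 34:1 figure
print(consolidation_ratio(1000, 20)) # 50 -> the 50:1 figure
```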

Ken Cline

Technical Director, Virtualization

Wells Landers

VMware Communities User Moderator

VMware vExpert 2009 | Blogging at: http://KensVirtualReality.wordpress.com/
azn2kew
Champion

I would use these scenarios for virtual machines, depending on role and purpose.

1. You of course maximize it when it comes to a dev/test environment, since it is not critically important.

2. A production environment with heavy loads like Exchange, SQL, Oracle, or GIS servers would be around 10-15 VMs due to high resource usage.

3. A production environment with low loads like IIS, file, FTP, and management servers should be fine with 40-50 VMs.

4. Technically, you should be able to run 64 VMs by using an 8 VMs per pCore ratio.

5. If you have 3-6 of these beasts in place with an N+1 design, then maximizing to 80-85% of resources would be good, with redundancy guaranteed.


If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant

VMware vExpert 2009, VCP 3 & 4, VSP, VTSP, CCA, CCEA, CCNA, MCSA, EMCSE, EMCISA
Texiwill
Leadership

Hello,

To back up Ken's response: even though you may be running a high consolidation ratio, the key is that you need enough resources spread between all your hosts to pick up the slack if one host dies completely (a bad motherboard, for example). You will also need to balance the load across your hosts even so. Say you plan on 50:1; you may end up with 34:1 after the balancing act that will progress as the systems are used and resource utilization increases. The absolute max I would ever put on a system is 80% utilization for CPU, disk, and network IO.
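The failover-capacity point reduces to a simple check: after one host dies, the survivors must absorb its VMs without blowing past whatever per-host ceiling you've set. A rough sketch, assuming evenly balanced hosts; the numbers are made up for illustration:

```python
import math

def survives_one_host_failure(hosts: int, vms_per_host: int, ceiling: int) -> bool:
    """After one host fails, can the surviving hosts absorb its VMs
    without exceeding the chosen per-host VM ceiling?"""
    total_vms = hosts * vms_per_host
    return math.ceil(total_vms / (hosts - 1)) <= ceiling

print(survives_one_host_failure(4, 45, 60))  # True: 180 VMs / 3 survivors = 60 each
print(survives_one_host_failure(4, 50, 60))  # False: 200 / 3 = 67 per survivor
```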

With the bigger systems, it is not necessarily CPU that is an issue but IO that will end up being the big issue.

However, in response to Oliver, I still think the quad-CPU quad-core with 512 GB of memory is better; I would expect an 8-CPU quad-core system to have at least 1 TB of memory to qualify as a beast. But hey, if someone still wants to give me a pair of either to play with, I would be quite happy. These boxes end up being almost an entire data center on their own.


Best regards,

Edward L. Haletky

Anders_Gregerse
Hot Shot

Well, there is no hard limit for me, but in order to get reasonable performance I try to get 3-4 VMs per core (in test environments I'll use 4-6 VMs per core). There is also the technology-islands problem, where you have servers with different CPUs and need to do CPU feature masking or build clusters based on CPU features. For most installations, memory and I/O are the first limit (in cost or performance). Insane contraptions like the DL785 G5 are impressive, but hardly suited to virtualization cost-wise. But I would be able to run all our VMs on one machine, and that is cool in some wicked way.

oreeh
Immortal

  • If they're "who cares if they're up", then load 'em up!

If they are of that type they shouldn't reside on an ESX host and take up precious SAN space...

williambishop
Expert

Unless you're running VDI. They're not critical, but they need the performance of a SAN.

--"Non Temetis Messor."
oreeh
Immortal

I would assume that VDI VMs are not of the "who cares" type.

If they are, there's something going wrong and someone unnecessarily spent a lot of money :)

williambishop
Expert

I think you are missing the point. Do I care if I lose 15 or 20 VMs? Not really; I have a couple thousand. They are, in fact, not even worth backing up. I can stand to lose a fair amount of them. But I believe I would be hard pressed to run that many on local storage. As with everything, IT VARIES. You cannot take one rule and apply it to everything.

--"Non Temetis Messor."
oreeh
Immortal

When we are speaking about thousands of VDI VMs I wouldn't care about a few either.
