VMware Cloud Community
jesse_gardner
Enthusiast

vSphere 4 - Maximum 40 VMs per host in large cluster for HA?

While doing research for our vSphere upgrade, I stumbled upon a concerning "configuration maximum" in VMware's official documentation. In , on page 7, for HA, it says "Configurations exceeding 40 virtual machines per host are limited to cluster size no greater than 8 nodes."

Also discussed in blogs and comments here and here.

For us, this is a very relevant and restrictive maximum. We are currently running a cluster of 10 IBM x3850s, averaging right around 40 VMs per host. HA is a necessity, particularly since we got our first real use out of it just the other week, with excellent results. This has big implications for us: it is changing my mind on upgrade timelines as well as hardware design, making us consider building a new cluster with blades instead of upgrading our current cluster.

I have two questions, perhaps better suited for VMware support, but I'll throw it out there:

1. Is this a soft limit, like a recommendation, where we'd probably be OK to hover around 40-45 VMs/host?

2. Is there any chance this is a typo, or is it likely to change in the relatively near future? It seems like such an odd imposition.

8 Replies
HyperViZor
Enthusiast

Unfortunately it's true. There are no mistakes here. Duncan Epping over at Yellow-Bricks.com elaborated on that; make sure to also read the comments on the post:

UPDATE: Sorry, I didn't notice that you already saw Duncan's blog post and linked to it :) ... my bad.

Hany Michael

HyperViZor.com | The Deep Core of The Phenomena
AntonVZhbankov
Immortal

Unfortunately, it's a hard limit due to the HA architecture.

I suppose you can add some memory and get an 8×100 HA cluster instead of 10×40.

It will change in the future for sure; hardware becomes more and more powerful. A single blade server can contain 24 cores today, and this brings us to a 2:1 vCPU consolidation ratio, which is pretty low.
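The consolidation-ratio arithmetic here can be spelled out in a quick sketch. The core count and VM cap are the figures from this thread; single-vCPU VMs are an assumption for illustration:

```python
# Illustrative numbers only: 24-core blade, 40-VM HA cap, 1 vCPU per VM.
cores = 24          # cores in one modern blade (per the post above)
vms = 40            # HA cap per host in clusters larger than 8 nodes
vcpus_per_vm = 1    # assumption: single-vCPU VMs
ratio = vms * vcpus_per_vm / cores
print(round(ratio, 2))  # about 1.67 vCPUs per core, i.e. under 2:1
```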


---

VMware vExpert '2009

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
RParker
Immortal

For us, this is a very relevant and restrictive maximum. We are currently running a cluster of 10 IBM x3850

Then you designed your cluster / network incorrectly. It's not restrictive, it's a precaution. The cluster is designed for failover, and with 40 VMs per host, those VMs have to roll over to some other host. You know you don't have to use HA... with FT available, HA also becomes less of an issue. You are still thinking ESX 3.5; look at the features of 4.0 and adjust your cluster accordingly, and you will see you will still win.

Besides, it's generally not a good idea to have more than 8 hosts in a cluster anyway; that is the best practice. I notice you didn't post the best practices, but instead you point out what you perceive as a deficiency, which is odd, because you should look at all the factors, not just face value.

jesse_gardner
Enthusiast

I hope this doesn't come off as stand-off-ish. I've enjoyed and been helped by your feedback many times, RParker.

Then you designed your cluster / network incorrectly.

I disagree, I think we just size our hosts differently. I am able to absorb multiple host failures, and I don't see how the network is configured incorrectly. If you truly mean this statement, please ask specific questions about my environment, as I am curious what you mean.

It's not restrictive, it's a precaution.

Precaution against not having enough failover capacity? If so, then why make it a VMs-per-host setting? Host sizing varies greatly, especially into the future. It should be per core, or better yet, an intelligent check of available CPU and memory failover resources, similar to warnings that already exist in the 3.x world. With 24/32-core hardware becoming available, and over 200 GB of RAM, this "precaution" makes that hardware essentially irrelevant in many/most server virtualization environments. A 2-node cluster of said hosts could easily absorb 40 VMs running on the other host were it not for this limit (let alone bigger, more realistic scenarios).
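A resource-based admission check along those lines could be sketched like this. This is a hypothetical illustration, not VMware's actual HA algorithm; all names and numbers are invented:

```python
# Hypothetical sketch of a failover-capacity check based on CPU and
# memory reservations rather than a flat VM-per-host count.

def can_tolerate_failures(hosts, vms, failures=1):
    """hosts: list of (cpu_mhz, mem_mb) capacities per host;
    vms: list of (cpu_mhz, mem_mb) reservations per VM.
    Worst case: assume the 'failures' largest hosts are the ones lost."""
    survivors = sorted(hosts)[:len(hosts) - failures]  # smallest hosts survive
    cpu_left = sum(c for c, _ in survivors)
    mem_left = sum(m for _, m in survivors)
    cpu_need = sum(c for c, _ in vms)
    mem_need = sum(m for _, m in vms)
    return cpu_need <= cpu_left and mem_need <= mem_left

# Two big hosts, 40 light VMs: the surviving host can absorb everything.
print(can_tolerate_failures([(24000, 204800)] * 2, [(500, 4096)] * 40))  # True
```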

My gut feeling is that this isn't a precaution, I think they ran into a limitation with some legacy code regarding their HA protection algorithm, and we are stuck with it for now.

You know you don't have to use HA... with FT available, HA also becomes less of an issue. You are still thinking ESX 3.5; look at the features of 4.0 and adjust your cluster accordingly, and you will see you will still win.

Well, HA is a big feature of VMware's, and one we are happy to pay for. It was put to great use when we lost two hosts just a few weeks ago. FT, with its current restrictions, isn't a big feature for us yet. It may be great for specific VMs in specific situations, but it won't offset HA to any degree. I'm guessing many shops feel the same way. I am happy with the features of 4.0, and am looking forward to implementing it, but I find this snag frustrating, as it does force me to change our design to one I feel is less optimal.

Besides, it's generally not a good idea to have more than 8 hosts in a cluster anyway; that is the best practice. I notice you didn't post the best practices, but instead you point out what you perceive as a deficiency, which is odd, because you should look at all the factors, not just face value.

Where is this "best practice" documented? Is it an up-to-date impression of said best practice?

In the end, I could summarize my frustration this way: if you find HA to be an important feature, like we do, this limitation really forces your architecture to be more complex, and potentially less efficient. Instead of having 10 big servers able to absorb one or two failures, I can choose to a) have 8 REALLY big servers and tie my hands from scaling out, forcing me to start new clusters, or b) have many more smaller servers, keeping my consolidation ratio under 40:1.

The more hosts in a cluster, the more the absorption of a failure can be spread around, and therefore the higher utilization you can run at day-to-day. Either way I choose to deal with this, it limits my consolidation ratio.
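That spreading effect is simple arithmetic; a tiny illustrative sketch:

```python
# The more hosts share the burden of one failed host, the higher the
# day-to-day utilization a cluster can safely run at. Illustrative math.

def usable_fraction(n_hosts, host_failures=1):
    """Fraction of total capacity usable while still reserving enough
    headroom to absorb 'host_failures' failed hosts."""
    return (n_hosts - host_failures) / n_hosts

print(usable_fraction(8))   # 0.875 -> 87.5% usable in an 8-host cluster
print(usable_fraction(10))  # 0.9   -> 90% usable in a 10-host cluster
```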

jesse_gardner
Enthusiast

A 2-node cluster of said hosts could easily absorb 40 VMs running on the other host were it not for this limit (let alone bigger, more realistic scenarios).

Doh! Disregard this. At 1:00am, my wandering mind wandered away from the actual facts of the situation. A 2-node cluster could run 100 VMs on each host. I understand that even with this limitation, you could host a lot of VMs in 8-host clusters with this big new hardware. Points I make near the end of my post are still valid.

AntonVZhbankov
Immortal

As of now, HA can handle 100 VMs per host in clusters of up to 8 hosts, and 40 VMs per host if there are more than 8 hosts in the cluster.

VMware will increase these numbers in the future without a doubt, but they do not like to say when. Maybe it will be in 4u1, maybe later.

So today you either live with 40 VMs per host in the current configuration, or divide it into 2 clusters, 5 hosts each, for example.

Remember, you still have the ability to VMotion VMs manually between clusters.
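The limits as described in this thread can be encoded in a few lines. The figures are taken from the posts above; treat this as a sketch of the rule, not authoritative documentation:

```python
# vSphere 4.0 HA maximums as discussed in this thread:
# up to 100 VMs/host in clusters of 8 hosts or fewer,
# 40 VMs/host in larger clusters.

def ha_supported(n_hosts, vms_per_host):
    max_vms = 100 if n_hosts <= 8 else 40
    return vms_per_host <= max_vms

print(ha_supported(8, 100))   # True
print(ha_supported(10, 40))   # True
print(ha_supported(10, 41))   # False
```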


jesse_gardner
Enthusiast

Does anyone know if DRS would be smart enough to keep fewer than 40 VMs on a host? Say our average was 30 VMs per host, but there were a couple of extremely high-utilization VMs, forcing DRS to unbalance the load numerically...

This limit could also reduce the effectiveness of DRS, and/or flip HA between functional and nonfunctional throughout any given day.
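To illustrate the worry: a purely load-driven balancer has no reason to respect a VM-count cap. A toy sketch, not DRS's real algorithm, with loads in arbitrary units:

```python
# Toy load balancer in the spirit of DRS: it places VMs by load alone,
# so a host can end up with far more than 40 VMs if they are light.

def place(vm_loads, n_hosts):
    hosts = [[] for _ in range(n_hosts)]
    for load in sorted(vm_loads, reverse=True):
        target = min(hosts, key=sum)   # currently least-loaded host
        target.append(load)
    return hosts

# One heavy VM plus 70 light ones across 2 hosts:
counts = [len(h) for h in place([50] + [1] * 70, 2)]
print(counts)  # -> [11, 60]: balanced load, wildly unbalanced VM counts
```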

krc1517
Enthusiast

I can verify that DRS will not keep VMs under 40. We found out the hard way.

91 VMs on 1 ESX host... crash. Split-brain all over the place... Thanks, DRS.

Anyone get any updated information on when more than 40 VMs or 8 hosts will be a recommended config? Right now we run BL685c's with 128 GB of RAM. The average host has ~40 VMs on it but can support much more in terms of pure CPU / memory consumption. It won't run at peak performance, but it will run while we fix the errors.
