VMware Cloud Community
Iamjedi
Contributor

Please recommend: how many ESXi hosts should we have in a cluster?

Dear All,

Please help me with a suggestion.

1. How many ESXi hosts should we have in a cluster?

   - When we create a new cluster in vCenter, how many hosts should the cluster have?

   - Is there a limit on the number of ESXi hosts in a cluster?

I'm concerned about performance, and about how complicated it becomes to handle a failed ESXi host when there are many hosts in a cluster.

Thank you and Best Regards,

Jirakorn I.

5 Replies
a_p_
Leadership

A single cluster can manage up to 64 hosts (see https://configmax.vmware.com/guest?vmwareproduct=vSphere&release=vSphere%206.7&categories=2-0).

The number of hosts to have in a cluster depends on your needs. How many hosts do you plan to have?

André

NathanosBlightc
Commander

The number of hosts that can be part of a VMware cluster depends on your vSphere version's configuration maximums; for example, in vSphere 6.7 it is 64 hosts per cluster.

But to calculate the required number of hosts, you need to estimate the current load plus the required failover load plus the Fault Tolerance load. Based on your needs, join the required ESXi hosts to the cluster, and then, using the factors mentioned above, configure HA Admission Control: either specify dedicated failover hosts or reserve sufficient physical resources (CPU, RAM).
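The sizing idea above can be sketched as a small calculation: size the cluster for the workload, then add hosts reserved for failover. This is a minimal illustration only; the function name and all figures (host specs, workload totals) are hypothetical, and real sizing should use your measured workload and VMware's admission-control settings.

```python
import math

def required_hosts(total_cpu_ghz, total_ram_gb,
                   host_cpu_ghz, host_ram_gb,
                   failover_hosts=1):
    """Estimate cluster size: enough hosts to carry the workload,
    plus extra hosts reserved for HA failover (e.g. N+1)."""
    hosts_for_cpu = math.ceil(total_cpu_ghz / host_cpu_ghz)
    hosts_for_ram = math.ceil(total_ram_gb / host_ram_gb)
    # Whichever resource needs more hosts is the constraint,
    # then reserve failover capacity on top of it.
    return max(hosts_for_cpu, hosts_for_ram) + failover_hosts

# Illustrative example: 400 GHz of CPU and 4 TB of RAM worth of VMs,
# on hosts with ~80 GHz CPU and 512 GB RAM each, with N+1 failover:
print(required_hosts(400, 4096, 80, 512, failover_hosts=1))  # -> 9
```

Here RAM is the constraint (8 hosts), so the cluster needs 9 hosts including the failover spare.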

Please mark my comment as the Correct Answer if this solution resolved your problem
Iamjedi
Contributor

I am thinking of about 10 or 12 hosts in a cluster. Do you think that is suitable or not? Could you provide me with details?

NathanosBlightc
Commander

How many physical resources (CPU, RAM) do these physical servers have?

And how much workload do you need for running the virtual machines?

It highly depends on the answers to the previous questions ... you cannot decide based only on the count of existing ESXi hosts in the cluster.

Please mark my comment as the Correct Answer if this solution resolved your problem
Tibmeister
Expert

Under 6.7 it is 64 hosts per cluster, though I have run slightly higher than that (~75) before splitting the cluster in two.  With the different DRS capabilities and the tie-in with vROps, VMs can move between clusters without any issue.  From a management standpoint, vDS and Datastores/Datastore Clusters are Datacenter-level objects, not cluster-level, so having multiple clusters in the same datacenter means you can have the same vDS and Datastores/Datastore Clusters across all the hosts for ease of management.

The link previously provided (https://configmax.vmware.com/guest?vmwareproduct=vSphere&release=vSphere%206.7&categories=2-0) is always the best source for these, since they can change even with updates.  Also, you must consider the other maximums: VMs per host, datastores per host, vDS ports per host, etc.  It's often easier to create a table and lay out your expectations that way; I like to use a whiteboard personally.

Overall, having multiple clusters in a datacenter, whether doing the same functions or not, is purely a design decision and has no impact on operations.  One thing to think about, though: if you wish to maintain N+1 on hardware, this is at the cluster level, not the datacenter level, so each cluster has its own N+1.
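The per-cluster N+1 point has a concrete cost implication that a quick calculation makes visible: every cluster carries its own spare, so splitting one cluster into several increases the total number of spare hosts. This is a trivial sketch with a hypothetical function name, just to illustrate the arithmetic.

```python
def spare_hosts_needed(num_clusters, redundancy_per_cluster=1):
    # N+1 (or N+k) redundancy is maintained per cluster, not per
    # datacenter, so each cluster needs its own spare capacity.
    return num_clusters * redundancy_per_cluster

# The same hosts as one cluster need 1 spare; split into two
# clusters, the workload now needs 2 spares (one per cluster):
print(spare_hosts_needed(1))  # -> 1
print(spare_hosts_needed(2))  # -> 2
```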
