I've read all about HA admission control on yellow-bricks and lots of other sites but still have trouble pinning a number on how many guests we can add to our clusters without violating HA.
We are using "Host failures cluster tolerates" as the admission control policy, which calculates failover capacity based on slot size.
As an example here's an image of a cluster with two hosts and no cpu or memory reservations.
Available slots is 198, but I can't believe we could add that many VMs to this cluster. What is the best way to plan how many "average" guests we can add?
198 slots at the obviously-not-feasible 32 MHz and 2300 MB?
Without CPU/memory reservations, the slot size is calculated from the CPU/memory overhead, so admission control will not behave the way you expect. As far as I know, the better approach is to analyze the vROps CPU/memory metrics for each VM, derive the average CPU/memory usage, and manually configure the slot size.
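To see why the slot count comes out so high without reservations, here is a minimal sketch of the slot math as I understand it. The field names, the ~252 MB overhead figure, and the host sizes are illustrative assumptions, not a VMware API:

```python
# Hypothetical sketch of the "Host failures cluster tolerates" slot math.
# Field names and numbers are illustrative, not an official VMware API.

DEFAULT_CPU_SLOT_MHZ = 32  # default CPU slot when no VM has a CPU reservation

def slot_size(vms):
    """CPU slot = largest CPU reservation (or the 32 MHz default);
    memory slot = largest memory reservation plus memory overhead."""
    cpu = max((vm["cpu_res_mhz"] for vm in vms), default=0) or DEFAULT_CPU_SLOT_MHZ
    mem = max(vm["mem_res_mb"] + vm["overhead_mb"] for vm in vms)
    return cpu, mem

def host_slots(host_cpu_mhz, host_mem_mb, slot_cpu, slot_mem):
    """A host holds as many slots as its tighter resource allows."""
    return min(host_cpu_mhz // slot_cpu, host_mem_mb // slot_mem)

# Example: no CPU reservations, one VM with a 2 GB memory reservation
# and an assumed ~252 MB overhead -> 32 MHz / 2300 MB slot.
vms = [
    {"cpu_res_mhz": 0, "mem_res_mb": 2048, "overhead_mb": 252},
    {"cpu_res_mhz": 0, "mem_res_mb": 0,    "overhead_mb": 120},
]
slot_cpu, slot_mem = slot_size(vms)
```

With a tiny 32 MHz CPU slot, memory is almost always the limiting resource, which is why the available-slot count looks inflated relative to what the cluster can realistically run.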
Enabling this policy isn't the recommended approach, especially on a two-node cluster, unless you're confident with the slot-size calculation. It can work in a smaller environment, but only as long as it stays small. Before setting a custom slot size, as mentioned above, analyze the average CPU and RAM metrics with a reliable monitoring tool.
That is extremely helpful. Thanks. Our larger cluster is 6 hosts. We have 2 servers with an 8GB memory reservation.
Available slots in this case is 26. Would it be advisable to also set slot size manually in this case?
Or-
In both cases would using percentage based admission control be the more preferred approach?
Even the percentage-based admission control policy works based on CPU/memory reservations.
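Roughly, the percentage-based check compares reserved capacity against a configured failover percentage instead of counting slots. A minimal sketch, with illustrative numbers and names (not a VMware API):

```python
# Hypothetical sketch of the percentage-based admission check.
# Real HA works from per-VM reservations plus overhead; these
# aggregate figures and the function name are assumptions.

def admit(total_mhz, total_mb, res_mhz, res_mb, vm_mhz, vm_mb, failover_pct):
    """Allow power-on only if, after reserving this VM's resources,
    the free capacity for CPU and memory each stays >= failover_pct."""
    free_cpu_pct = 100 * (total_mhz - res_mhz - vm_mhz) / total_mhz
    free_mem_pct = 100 * (total_mb - res_mb - vm_mb) / total_mb
    return free_cpu_pct >= failover_pct and free_mem_pct >= failover_pct
```

So one very large reservation no longer dominates the whole calculation the way it does with slot sizing; it just consumes its own share of the reserved percentage.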
Right. But when using percentages, slot size is not used, correct?
I wanted to clarify the 2300MB memory slot size in my screen shot.
We do have some servers in this cluster with a 2GB memory reservation.
That's where the 2300MB number comes from.
Right. The largest CPU reservation, and the largest memory reservation plus memory overhead, among all VMs are taken to calculate the slot size.
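That matches the arithmetic in the screenshot: 2300 MB is the 2 GB reservation plus roughly 252 MB of overhead (the exact overhead split is an assumption here, inferred from the two numbers):

```python
# Worked check of the 2300 MB memory slot from the screenshot:
# 2 GB reservation plus ~252 MB memory overhead (overhead value
# is an assumption inferred from 2300 - 2048).
mem_reservation_mb = 2048
overhead_mb = 252
slot_mem_mb = mem_reservation_mb + overhead_mb  # 2300
```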
vijayrana968-
On a 2 node cluster is it recommended to use the percentage based approach?
Or did you mean that using the slot size method is okay, so long as the size is calculated accurately?
I would go with percentage-based, with 50% reserved for failover. Slot size works well when the size is calculated accurately. After all, it depends on the workload and the reservations you're going to run in the cluster.