I have a cluster with 3 hosts (3x 256GB memory). So far, so good. 😉
I can check the consumed memory under Cluster -> Hosts.
At first glance, vSphere HA will have a problem doing an HA failover: consumed memory is too high.
If I check the consumed memory with esxtop, I can see that the VMkernel (vSAN) consumes 48GB of memory:
If one host fails, we only need 137GB of memory to fail over, excluding the 48GB of memory that is included in the first screenshot. So an HA failover should be possible.
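The headroom math above can be sketched as follows. This is just back-of-the-envelope arithmetic using the numbers from this post (256GB per host, 48GB VMkernel/vSAN overhead, 137GB VM demand), not vSphere's actual admission control algorithm:

```python
# Toy failover-headroom calculation with the numbers from this post.
HOSTS = 3
HOST_MEM_GB = 256
VSAN_OVERHEAD_GB = 48   # per-host VMkernel/vSAN consumption seen in esxtop
VM_DEMAND_GB = 137      # memory the VMs need after one host fails

# Capacity left on the surviving hosts, after subtracting vSAN overhead:
surviving_hosts = HOSTS - 1
capacity_gb = surviving_hosts * (HOST_MEM_GB - VSAN_OVERHEAD_GB)

print(f"Capacity on surviving hosts: {capacity_gb}GB, demand: {VM_DEMAND_GB}GB")
print("Failover fits:", VM_DEMAND_GB <= capacity_gb)
```

With these numbers, 2 x (256 - 48) = 416GB remains, comfortably above the 137GB needed.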
Is "Admission Control" smart enough to take that into account?
Is it possible to modify this Hosts view, or is there another view to check the consumed memory without vSAN?
Or is the only way to check the consumed memory in "Advanced Performance" and subtract the vSAN memory from the host memory?
I am aware that the standard Hosts view is useful and important, because ESXi and vSAN need memory in parallel to the virtual machines.
Thanks and regards,
First and foremost, Admission Control looks at "reserved memory", not at "consumed" or "used". Secondly, all reserved resources are considered by Admission Control, as resources can only be reserved once. Meaning, when something (a VM, or a system process) has reserved 20GB of memory, then nothing else can claim those 20GBs.
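A minimal sketch of that reserved-vs-consumed distinction (all VM names and numbers below are hypothetical; this illustrates the reservation bookkeeping idea, not vSphere's actual slot-size or percentage-based policy):

```python
# Toy model: Admission Control accounts for reservations, not consumed memory.
# Hypothetical VMs: "consumed" can be far higher than what is actually reserved.
vms = [
    {"name": "vm1", "reserved_gb": 20, "consumed_gb": 60},  # consumed >> reserved
    {"name": "vm2", "reserved_gb": 0,  "consumed_gb": 40},  # no reservation at all
]
system_reserved_gb = 48  # e.g. VMkernel/vSAN memory on a host

# What Admission Control tracks:
total_reserved = system_reserved_gb + sum(v["reserved_gb"] for v in vms)
# What the "consumed" graph in the UI shows:
total_consumed = system_reserved_gb + sum(v["consumed_gb"] for v in vms)

print(f"reserved: {total_reserved}GB, consumed: {total_consumed}GB")

# A new 20GB reservation is admitted as long as unreserved capacity remains,
# regardless of how high the "consumed" graph looks.
host_capacity_gb = 256
print("Can reserve 20GB more:", total_reserved + 20 <= host_capacity_gb)
```

The point of the sketch: the consumed total (148GB here) is irrelevant to the admission decision; only the 68GB of reservations counts against capacity.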
Another thing to point out here is that "consumed" memory doesn't always mean that the memory is actively used. It could, for instance, be that you have VMs running which touched all of their memory pages at boot; this makes it seem as if many memory pages are in use, while in reality those pages are sitting idle. So the "consumed memory" graphs can be somewhat misleading.
Either way, as long as you have Admission Control enabled and HA is not warning that you have run out of resources, then there's nothing to worry about when it comes to restarting the VMs! If you want to guarantee a level of performance, look at the "performance degradation VMs tolerate" option in the vSphere HA section, as that allows you to specify whether you are willing to accept lower performance after a failure or not.