VMware Cloud Community
edgrigson
Enthusiast

Slot size and 'missing' host RAM

I've been checking the HA admission control configuration on our clusters and there's something bugging me. We're still using 'no. of failover hosts' as our policy, hence slots are used. The slot size on our cluster is what I'd expect, but the number of slots isn't - we seem to have fewer slots than I'd expect. For example:

We have five hosts in a cluster, each with 48GB RAM for a total of 240GB

The slot size is 2246MB of RAM (I'm ignoring CPU as it's not a constraint)

So I'd expect 240/2.246 = 107 slots (at least approximately). What I get is 90 slots.

Adding up the RAM allocated to those 90 slots comes to only 200GB - what's happened to my missing 40GB of RAM? The VM virtualisation overhead is already included in the slot size. Given that this is a five-host cluster we're missing roughly 8GB per host (out of 48GB) - that's a hefty 16%. I thought Frank Denneman covered this previously (maybe it's covered in his latest book on HA/DRS?) but if so I can't find it now.
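To make the gap concrete, here's the back-of-envelope arithmetic I'm doing (just the figures above, treating 1GB as 1000MB for a rough number - nothing pulled from an API):

```python
# Rough figures from the cluster described above (1GB treated as 1000MB).
cluster_ram_mb = 5 * 48 * 1000   # five hosts x 48GB physical RAM
slot_size_mb = 2246              # memory slot size reported by HA

expected_slots = round(cluster_ram_mb / slot_size_mb)      # ~107
observed_slots = 90
missing_gb = (expected_slots - observed_slots) * slot_size_mb / 1000
print(expected_slots, observed_slots, round(missing_gb))   # 107 90 38
```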

Can anyone shed some light on this?

Thanks,

Ed.

admin
Immortal

The missing resources are overhead from the VMkernel and the management agents that run in the COS (vpxa, hostd, aam, etc.) - so not all of a host's physical resources are available to run VMs. If you click on the cluster's "Resource Allocation" tab you'll see how much aggregate resource is available for VMs (it should be around 200GB in your case).

Elisha

a_p_
Leadership

Take a look at http://www.yellow-bricks.com/vmware-high-availability-deepdiv (the section "How does HA calculate how many slots are available per host?"). It explains how slots are calculated and why you see 90 slots.

André

edgrigson
Enthusiast

Thanks for the feedback, and yes it's definitely a memory overhead - I'm trying to quantify exactly what uses the 'missing' memory. Duncan's article (which I've read a few times over the years) simply refers to slot sizes being calculated from 'available memory', so I'm trying to understand what constitutes 'available'. I set myself a hypothetical VCAP-DCA question to determine the number of slots in a given cluster, and I'm struggling to see how vCenter actually works it out.

On the cluster Summary tab, the total memory is stated as 240GB. That's obviously the physical RAM in the hosts, not what's available to VMs.

On the cluster Resource Allocation tab, there are various figures (ESX 4.0U1):

  1. 221406MB total capacity
  2. 77712MB reserved capacity
  3. 7390MB overhead capacity
  4. 143694MB available capacity

On an individual host's Configuration -> Memory tab:

  1. 49142MB total
  2. 3484MB system
  3. 44858MB virtual machines
  4. 800MB service console (all hosts are full fat ESX)

From the individual host figures I can see that roughly 4GB per host is taken by the VMkernel and Service Console combined, leaving about 44GB per host. That makes 220GB when multiplied by five for the cluster, which matches the total capacity figure given under the Resource Allocation tab. So far, so good. BUT the slot size on this cluster is 2246MB, so surely I should get 98 slots (221406/2246)? In fact I get 95. If I subtract the 7390MB 'overhead capacity' from the total, that gives me the resulting 95 slots, so maybe that's the answer. Can anyone tell me what extra overhead that third figure refers to? I suspect I'm getting too curious and we all know what that did to the cat...
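In case it helps anyone following along, the sums I'm doing are just these (plain arithmetic on the figures above, nothing pulled from an API):

```python
# Figures from the cluster's Resource Allocation tab (ESX 4.0U1), as listed above.
total_capacity_mb    = 221406   # what's left after the VMkernel and Service Console
overhead_capacity_mb = 7390     # the mystery third figure
slot_size_mb         = 2246     # memory slot size reported by HA

print(total_capacity_mb // slot_size_mb)                            # 98 - what I expected
print((total_capacity_mb - overhead_capacity_mb) // slot_size_mb)   # 95 - what I actually see
```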

Regards,

Ed.

admin
Immortal

Would you mind uploading a screenshot of the resource allocation tab of the cluster? I'm not sure what the overhead capacity is that you're referring to.

Elisha

admin
Immortal
(Accepted solution)

One thing worth pointing out is that computing the number of available slots in the cluster by dividing the total available memory by the slot size is a shortcut that will give slightly inaccurate results - in particular it will tend to give slightly more slots than HA will show. HA actually computes total slots by dividing each host's available memory by the memory slot size, and then summing the results. This may leave some resources on each host "wasted" if its available memory isn't an exact multiple of the slot size. When you compute with the aggregate memory resources you may see slots that are really the combination of these "wasted" resources across hosts, but since a VM can only run on one host at a time (discounting FT) those slots have to be discarded. Hope that made sense.

Assuming your hosts are all the same, you can estimate a host's available memory by dividing the aggregate memory by the number of hosts, though that may not be completely accurate. You can't view the exact available memory of a host while it is in a cluster - VC only shows the aggregate resources (the sum across all hosts). You can, however, connect the VI Client to a host directly and go to its Resource Allocation tab to see the exact available memory.
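Roughly like this, with made-up per-host numbers just to show the rounding effect (the per-host figures below are hypothetical, not pulled from VC):

```python
# Hypothetical available memory per host, in MB (five near-identical hosts).
hosts_available_mb = [44281, 44281, 44281, 44281, 44282]
slot_size_mb = 2246

# Shortcut: divide the aggregate by the slot size - tends to over-count.
aggregate_slots = sum(hosts_available_mb) // slot_size_mb          # 98

# What HA does: floor per host, then sum - leftovers on each host are "wasted".
ha_slots = sum(mem // slot_size_mb for mem in hosts_available_mb)  # 95

print(aggregate_slots, ha_slots)
```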

Elisha

edgrigson
Enthusiast

Here's a screenshot of the overhead I mentioned. Interestingly, this figure isn't shown on my lab cluster, although that's running on whitebox h/w and is a mixed ESX/ESXi cluster (also on 4.1 rather than 4.0U1).

[Attachment: screenshot.jpg]

Obviously the main overhead is the VMkernel, device drivers etc., so I'm happy I understand how to calculate the number of slots. I'm sure this is all documented somewhere and I just need to spend some time correlating the various memory metrics so I know what each represents. I thought the Resource Guide would include things like this, but it's surprisingly sparse. Thanks again for all your help,

Regards,

Ed.

admin
Immortal

Check out the availability guide (www.vmware.com/pdf/vsphere4/r41/vsp_41_availability.pdf) for details on HA admission control and how slots are computed.

Elisha

depping
Leadership

"overhead memory", it is the amount of resources reserved on a per VM level for virtualization overhead.... So each VM has a specific amount of overhead and that is all rolled up into this metric. At least that is my understanding,
