It seems that even though a policy is configured with the Allocation model, the Capacity Remaining analysis page still uses analytics to calculate the used amount. I thought allocation was based purely on the number of powered-on vCPUs, for example when the CPU Allocation option is set in the policy. However, it appears that "average demand" is what it considers as used, not the vCPUs actually powered on and running in virtual machines.

The result is that for a cluster configured with a 1:1 CPU overcommit policy, Capacity Remaining still shows a number of remaining vCPUs even though I have provisioned more vCPUs than there are cores across all the hosts. This doesn't look like pure allocation; it looks like a hybrid of the allocation and demand models. The total and usable capacity are calculated correctly using the overcommit ratio, but the used amount is based on demand, hence the name Average Demand.

Can someone shed some light on this? Is there any way to use strictly the number of powered-on vCPUs as the used amount?
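To make the expectation concrete, here is a minimal sketch of the arithmetic I assumed a strict allocation model would use. The function name and numbers are my own illustration, not anything from the product:

```python
def allocation_remaining(physical_cores, overcommit_ratio, powered_on_vcpus):
    """Strict allocation model as I understood it:
    usable capacity = physical cores * overcommit ratio,
    used capacity   = count of powered-on vCPUs (not demand)."""
    usable_vcpus = physical_cores * overcommit_ratio
    return usable_vcpus - powered_on_vcpus

# Example: a cluster with 64 cores, a 1:1 overcommit policy,
# and 80 vCPUs powered on. Under strict allocation this should
# report negative remaining capacity, not a positive number.
print(allocation_remaining(64, 1, 80))  # -16
```

Instead, the page appears to substitute average demand for the powered-on vCPU count in the "used" term, which is why remaining capacity stays positive.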
There is Demand|Average Demand and Allocation|Average Demand. It's just a naming thing, and applies to the metric group name/classification. You see Allocation|Average Demand.
Think of that as looking on a map and seeing "Springfield", and questioning why every state has one. It's not the same town, nor the same county. The same people don't live in every location. It's just a matter of somebody, somewhere, deciding they wanted to reuse the terminology even though "Demand" is also a type of capacity model.
Two screenshots would help in this situation: one of the applied policy > Analysis Settings > <the object type you're looking at> > Capacity Remaining section, and one of the Analysis > Capacity Remaining tab with all tabs expanded.