I am running vCOps 5.8.2 at present, and primarily use it for capacity management. A lot of time was spent tuning the configuration policies, including an engagement with VMware. In any event, as I have been going through sites trying to clean up storage capacity issues, either by allocating more space, planning to order more disk, or converting thin-provisioned disks to thick, I have noticed that the over-commitment numbers produced by vCOps seem to be inaccurate. Below is an example. Perhaps I am misinterpreting what this metric means; if not, it seems way off.
For reference, the policy is set up for usable capacity, with a 20% buffer on disk space and 0% over-commitment on disk space.
If each VM is 527 GB on average, and there are 111 VMs, that's roughly 58 TB of allocated space. 58 TB / 43 TB = 134%. We validated that the VM count and the total space numbers reported in vCOps are accurate. That being the case, I am confused about where and how vCOps is calculating the 458%. We have the policy set to a 20% buffer and a 0% Allocation Overcommit Ratio. Is the vCOps algorithm flawed once we get into negative numbers (i.e., overallocation), or are we somehow misreading these numbers? To me, 458% overallocated means you would need to add 4.5x as much storage as you already have to get back to even, when in reality it looks as though 25-30 TB will do the trick, rather than the 180 TB or so the percentage would imply.
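To make the discrepancy concrete, here is a back-of-the-envelope sketch of the math above. The formula (allocated space divided by usable capacity) is my assumption of what the overcommitment metric should mean; it is not taken from vCOps documentation, and the 20%-buffer variant is likewise an assumed interpretation of the policy setting.

```python
# Rough overcommitment check, assuming overcommit % = allocated / usable.
# All figures come from the numbers posted above; TB here means 1000 GB.
vm_count = 111
avg_vm_size_gb = 527      # average allocated space per VM
usable_tb = 43            # usable datastore capacity reported by vCOps

allocated_tb = vm_count * avg_vm_size_gb / 1000          # ~58.5 TB
overcommit_pct = allocated_tb / usable_tb * 100          # ~136%

# If the 20% buffer shrinks the usable capacity first (an assumption),
# the ratio rises, but nowhere near 458%:
effective_tb = usable_tb * 0.8                           # 34.4 TB
overcommit_buffered_pct = allocated_tb / effective_tb * 100  # ~170%

print(f"allocated: {allocated_tb:.1f} TB")
print(f"overcommit vs usable: {overcommit_pct:.0f}%")
print(f"overcommit vs buffered usable: {overcommit_buffered_pct:.0f}%")
```

Neither variant gets anywhere close to 458%, which is why the reported figure looks wrong to us.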
Do you use RDMs? Attach a vC Ops report so that we can see the config of your policy.
We are running into the same type of results. I'm curious if we are interpreting this number incorrectly as well.
We do not use RDMs.
I see that vRealize is using the counter capacity.contention to calculate the overcommitment numbers, but the numbers seem way off and I can't find any documentation on that counter.