I've been trying to figure out how to measure CPU ready time and how to calculate ready time percentages. There seem to be some conflicts in what has been posted - I'm hoping to straighten this out once and for all. There are two sources of CPU ready time values: from ESXTOP there is %RDY, and from the VI Client or vCenter there is CPU Ready in milliseconds. Now I have a few questions:
Is the %RDY value in ESXTOP calculated as (CPU Ready Time)/(CPU Used Time), as (CPU Ready Time)/(total time per interval), or as something else?
There seems to be confusion about how to calculate %CPU Ready time from the VI Client/vCenter. Some postings say to compare CPU Ready time to CPU Used time; others say to calculate %CPU Ready as (CPU Ready Time)/(total time per sampling interval). For example, if a one-week interval is being analyzed, the sampling intervals are 30 minutes = 60*30 = 1800 seconds, so if Ready time for one sample was 30 seconds, that would give %Ready = 30/1800 ≈ 1.67%. But doesn't this only matter if Ready time is a large percentage of CPU Used time?
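To make the second interpretation concrete, here is a small sketch of that arithmetic (the 30-second Ready value is just my example number, not from a real VM):

```python
# Sketch of the (Ready time)/(sampling interval) interpretation
# for a "past week" vCenter chart with 30-minute samples.
SAMPLE_INTERVAL_S = 30 * 60   # 1800 s per sample
ready_time_s = 30             # example CPU Ready for one sample, in seconds

pct_ready = ready_time_s / SAMPLE_INTERVAL_S * 100
print(f"%Ready = {pct_ready:.2f}%")  # -> %Ready = 1.67%
```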
When viewing CPU time in the VI Client or vCenter, I believe the following statistics should add up to the time per sampling interval:
Sampling interval = CPU Ready + CPU Used + CPU Wait + CPU System
So for realtime, the interval = 20 seconds
20 seconds = 20,000 ms = CPU Ready + CPU Used + CPU Wait + CPU System
For one of my VMs, the averages for one interval (in ms) are:
CPU Used = 362.35
CPU Ready = 12.117
CPU Wait = 19367
CPU System = 1.839
Adding these up gives 19743.306, which is reasonably close to the 20,000 ms sampling interval.
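Here is the sanity check I did on those numbers, written out (this just sums my four counters and compares against the 20 s realtime interval; it isn't an official formula from VMware):

```python
# Check that the per-sample CPU counters (all in ms) roughly
# account for the 20-second realtime sampling interval.
INTERVAL_MS = 20_000

counters = {
    "used":   362.35,
    "ready":  12.117,
    "wait":   19367,
    "system": 1.839,
}

total = sum(counters.values())
print(f"sum = {total:.3f} ms; unaccounted = {INTERVAL_MS - total:.3f} ms")
# -> sum = 19743.306 ms; unaccounted = 256.694 ms
```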
So given this - which formula should we use?
%CPU Ready = CPU Ready / Sampling interval
or
%CPU Ready = CPU Ready / CPU Used
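To show how different the two answers come out, here is the comparison using my realtime numbers above (again, just my own arithmetic, not a vendor-documented calculation):

```python
# Compare the two candidate %Ready formulas using the realtime
# sample values above (all in ms).
INTERVAL_MS = 20_000
ready_ms = 12.117
used_ms = 362.35

pct_of_interval = ready_ms / INTERVAL_MS * 100  # Ready vs. wall-clock time
pct_of_used = ready_ms / used_ms * 100          # Ready vs. CPU Used time

print(f"Ready/interval = {pct_of_interval:.2f}%")  # ~0.06%
print(f"Ready/used     = {pct_of_used:.2f}%")      # ~3.34%
```

So the same VM looks nearly idle by one formula and noticeably contended by the other, which is why I'd like to know which one is right.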
Thanks,
Ed