The vCPU waste originates from your oversized VM settings in the applied config policy for each VM resource type.
The formula is a bit involved, but loosely the idea is this:
You have a number of vCPUs provisioned to a VM.
The "Planning" attrib category calc's cpu usage % periodically.. ~1x per day.. this intelligence come from your config policy / oversized VMs settings as well as some of the other cap planning model settings.
Take your provisioned vCPUs and subtract the number of cores that aren't being utilized (driven largely by CPU demand); what remains is the number of cores the planning calculation deems necessary. The cores not deemed necessary are the waste.
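The subtraction described above can be sketched in a few lines. This is a minimal illustration only: the function name, the demand-plus-headroom sizing rule, and the parameter names are my assumptions, not the product's exact algorithm.

```python
import math

def vcpu_waste(provisioned_vcpus, peak_cpu_demand_pct, headroom_pct=10):
    """Illustrative estimate of wasted vCPUs for a VM.

    provisioned_vcpus   -- vCPUs currently assigned to the VM
    peak_cpu_demand_pct -- observed peak CPU demand over the planning
                           window, as a percentage of provisioned CPU
    headroom_pct        -- assumed safety buffer on top of observed demand
    """
    # Cores deemed necessary = observed demand (plus headroom)
    # applied to what is provisioned, rounded up to whole cores.
    needed = math.ceil(provisioned_vcpus * (peak_cpu_demand_pct + headroom_pct) / 100)
    # Clamp: at least one core, never more than what is provisioned.
    needed = max(1, min(needed, provisioned_vcpus))
    # Whatever is provisioned beyond the necessary cores is waste.
    return provisioned_vcpus - needed

# Example: 8 vCPUs provisioned, 15% peak demand -> 2 cores needed, 6 wasted
print(vcpu_waste(8, 15))
```

With these (assumed) numbers the function reports 6 wasted vCPUs, which is the same shape of result the report shows for an 8-vCPU VM that only needs 2 cores.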
It is important to tune your configuration policy settings, as these are critical to the accuracy of the capacity planning features (including the efficiency badges).
I have a question about this calculation. I changed the number of vCPUs of a VM from 8 to 2 on 27 November. Today, if I run the Oversized VMs report on my farm, this VM still appears with 8 vCPUs and 6 vCPUs of waste. Why? If I reduce the time period under Manage Display Settings to 10 (under Non Trend View), the report tells me the VM has 2 vCPUs (correct) and 1 vCPU of waste.
I think this is a bug, because when I run Oversized reports I shouldn't have to verify whether the reported number of vCPUs (or the current memory configuration) is up to date or not.