VMware Cloud Community
Iwan_Rahabok
VMware Employee

cpu.usage.average

Hi,

I'm trying to get the _maximum_ CPU utilisation for a given VM. Kindly see the chart.

Using PowerCLI/SDK, I can't seem to get the maximum value.

Get-Stat gives me a list of individual values:

get-stat -Entity R0_vCloud_DB -stat cpu.usage.average

I can't seem to get the maximum, minimum, etc.

To get the maximum, I have to calculate it myself:

get-stat -Entity R0_vCloud_DB -stat cpu.usage.average | Measure-Object -Property Value -Maximum

Does that mean the vSphere Client also calculates it on the fly? It does make sense, but I never thought about it that way until I had to do it programmatically.

Thanks from Singapore.

e1
6 Replies
RvdNieuwendijk
Leadership

Hi Iwan,

I think the vCenter client calculates the value on the fly.

Your script has a bug. It calculates the maximum of the average values. I think it should be:

get-stat -Entity R0_vCloud_DB -stat cpu.usage.maximum | Measure-Object -Property Value -Maximum


Regards, Robert

Blog: https://rvdnieuwendijk.com/ | Twitter: @rvdnieuwendijk | Author of: https://www.packtpub.com/virtualization-and-cloud/learning-powercli-second-edition
LucD
Leadership

It's a bit more complicated than "on the fly".

The maximum and minimum values are calculated during the aggregation step for each interval.

And the maximum and minimum metrics are only available for the intervals that are at level 4.

An example: the ESX(i) server has a cpu.usage.average metric.

This is in the Realtime interval, and that interval is 20 seconds long.

When vCenter collects these values, it aggregates the 20-second samples into Historical Interval 1 values.

These have a 5-minute interval.

If Historical Interval 1 is configured to have level 4 metrics, the aggregation job will calculate the maximum and minimum values.

So the aggregation job takes 15 realtime values, calculates the average, and assigns the lowest and the highest of those realtime values to the minimum and maximum metrics for the 5-minute interval.

See my PowerCLI & vSphere statistics – Part 1 – The basics post for more info on intervals, aggregation jobs, levels...
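To see how those intervals are configured on your own vCenter, a minimal sketch like this (assuming a connected vCenter 4.1 or later session, which exposes the interval level through the PerformanceManager view) lists each historical interval with its sampling period and statistics level:

# Minimal sketch, assuming a connected vCenter session: list the historical
# intervals with their sampling period (seconds), retention length and level.
$si      = Get-View ServiceInstance
$perfMgr = Get-View $si.Content.PerfManager
$perfMgr.HistoricalInterval |
    Select-Object Name, SamplingPeriod, Length, Level, Enabled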

A short example:

the following script takes 15 realtime values for cpu.usage.average from an ESX(i) server

the Measure-Object cmdlet emulates the aggregation job that takes these 15 values and converts them into Historical Interval 1 values (average, minimum and maximum)

$esxName = "MyEsx" 
$esx = Get-VMHost -Name $esxName
$stats = Get-Stat -Entity $esx -Stat cpu.usage.average -Realtime -MaxSamples 15 -Instance ""
$stats | Measure-Object -Property Value -Average -Minimum -Maximum

Note the -Instance parameter on the Get-Stat cmdlet; it tells Get-Stat to take the value aggregated over all CPUs present on the ESX(i) host.
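If you leave -Instance off altogether, Get-Stat returns the aggregate series (Instance "") plus one series per CPU core. A quick sketch (reusing $esx from above) to see the distinct instances that come back:

# Without -Instance, the result contains the aggregate ("") plus one series per core.
Get-Stat -Entity $esx -Stat cpu.usage.average -Realtime |
    Group-Object -Property Instance |
    Select-Object Name, Count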


Blog: lucd.info  Twitter: @LucD22  Co-author PowerCLI Reference

Iwan_Rahabok
VMware Employee

Hi Robert,

Thanks for the reply. What I found is that Get-Stat does not take the realtime statistics by default. In fact, it's taking the "past year" data, which has a sampling period ("sampling period secs") of 86400.

Since vCenter 4.1 does not keep the absolute Max and Min as it rolls up, we lose that data in the "past year" statistic.

PowerCLI gives the error that the metric counter cpu.usage.max does not exist. I also tried cpu.usage.maximum. So apparently even the metric is gone too 🙂 I thought it would return 0 or tell me it does not exist.
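For what it's worth, a minimal sketch (reusing the VM from this thread) to check which counters the entity actually exposes, and to stay on the shorter-interval samples instead of the daily roll-ups:

# List the cpu.usage counters available for this entity; cpu.usage.maximum
# only shows up if an interval is collecting at statistics level 4.
$vm = Get-VM -Name "R0_vCloud_DB"
Get-StatType -Entity $vm | where {$_ -like "cpu.usage.*"}

# Ask for a recent window so Get-Stat typically returns the 5-minute
# Historical Interval 1 samples rather than the daily "past year" roll-ups.
Get-Stat -Entity $vm -Stat cpu.usage.average -Start (Get-Date).AddDays(-1) |
    Measure-Object -Property Value -Maximum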

Cheers!

e1
Iwan_Rahabok
VMware Employee

Thank you. Wonderful answer. Wish there was a way to give 10x more points.

e1
jwozvmguy
Contributor

Thanks for the info!

LucD wrote:

Note the -Instance parameter on the Get-Stat cmdlet; it tells Get-Stat to take the value aggregated over all CPUs present on the ESX(i) host.

The -Instance "" parameter made all the difference! I couldn't figure out what was causing the cpu.usage to not populate!

If anyone's interested, here's where I've used this. The following spits out CSV files containing realtime stats for each host in your connected PowerCLI session (you need to do Connect-VIServer <host or vCenter> first). We will be using this to import the realtime stats into our 3rd-party enterprise monitoring / capacity planning tool.

# Array of metrics we're going to collect.
$metrics = "cpu.usagemhz.average","cpu.usage.average","mem.consumed.average","mem.usage.average","net.usage.average","disk.maxTotalLatency.latest","disk.usage.average","mem.swapinRate.average","mem.swapoutRate.average"
$allHosts = Get-VMHost
$start = (Get-Date).AddMinutes(-60) # Last X minutes

# -------------------------------------------------------------------
# BEGIN MAIN SCRIPT BODY
# -------------------------------------------------------------------
write-host "Running vSphere Stats Collector for all hosts in your connected session."
write-host "Total Hosts: "($allHosts | Measure-Object).count

# In the following line beginning with "$report =", note the following:
# Use [-Realtime] to collect the (near) realtime interval stats (every 20 seconds is the most granular it will get)
# Use [-IntervalMins 5] instead to collect the 5-minute averaged interval stats
# Remember to use [-Instance ""] to allow selection of CPU % averaged over all CPU instances.

ForEach ($omgHost in $allHosts) {
    $report = Get-Stat -Entity $omgHost -Stat $metrics -Realtime -Start $start -Instance "" |
        Group-Object -Property EntityId,Timestamp | %{
            New-Object PSObject -Property @{
                HostName         = $_.Group[0].Entity.Name
                DateTime         = $_.Group[0].Timestamp
                CpuMhzAvg        = ($_.Group | where {$_.MetricId -eq "cpu.usagemhz.average"}).Value
                CpuAvgPct        = ($_.Group | where {$_.MetricId -eq "cpu.usage.average"}).Value
                MemConsumedAvgKb = ($_.Group | where {$_.MetricId -eq "mem.consumed.average"}).Value
                MemUseAvgPct     = ($_.Group | where {$_.MetricId -eq "mem.usage.average"}).Value
                MemTotalMB       = $omgHost.MemoryTotalMB
                NetUseAvgKBps    = ($_.Group | where {$_.MetricId -eq "net.usage.average"}).Value
                DiskMaxLatencyMS = ($_.Group | where {$_.MetricId -eq "disk.maxTotalLatency.latest"}).Value
                DiskUsageAvgKBps = ($_.Group | where {$_.MetricId -eq "disk.usage.average"}).Value
                MemSwapInKBps    = ($_.Group | where {$_.MetricId -eq "mem.swapinRate.average"}).Value
                MemSwapOutKBps   = ($_.Group | where {$_.MetricId -eq "mem.swapoutRate.average"}).Value
            }
        }

    $fileName = $omgHost.Name + "_RealTime_Stats.csv"
    write-host "Outputting file: " $fileName
    $report | select-object -Property HostName,DateTime,CpuMhzAvg,CpuAvgPct,MemConsumedAvgKb,MemUseAvgPct,MemTotalMB,NetUseAvgKBps,DiskMaxLatencyMS,DiskUsageAvgKBps,MemSwapInKBps,MemSwapOutKBps | Export-Csv $fileName -NoTypeInformation -UseCulture
}
write-host "Script Complete"


There's probably a better way of doing this in PowerCLI but this seems to work.

LucD
Leadership

Nice script.

I think you used most of the best practices (one Get-Stat call with all the metrics, and Group-Object to analyse the results per entity).
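For reference, a minimal sketch of that pattern taken one step further: a single Get-Stat call for every host and every metric at once, then Group-Object to split the result per entity (the metric list and the 60-minute window here are just placeholders):

# One Get-Stat call for all hosts and all metrics, then split per host.
$metrics  = "cpu.usage.average","mem.usage.average"
$allHosts = Get-VMHost
$start    = (Get-Date).AddMinutes(-60)

Get-Stat -Entity $allHosts -Stat $metrics -Realtime -Start $start -Instance "" |
    Group-Object -Property {$_.Entity.Name} | %{
        New-Object PSObject -Property @{
            HostName  = $_.Name
            # Average of the realtime samples per metric over the window
            CpuAvgPct = ($_.Group | where {$_.MetricId -eq "cpu.usage.average"} |
                        Measure-Object -Property Value -Average).Average
            MemAvgPct = ($_.Group | where {$_.MetricId -eq "mem.usage.average"} |
                        Measure-Object -Property Value -Average).Average
        }
    }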


Blog: lucd.info  Twitter: @LucD22  Co-author PowerCLI Reference
