0 Replies Latest reply on Sep 13, 2018 8:15 AM by QN42

    Detecting CPU Starvation from usage

    QN42 Lurker

  I've looked through some articles but can't seem to find (or maybe understand) an explanation of the following:

       

  Assume that you have a 10-core host (100 MHz per core, 1000 MHz total) running 10 VMs, each with 1 vCPU and no CPU reservation.

  Now assume that all VMs are consuming a constant 50 MHz each.

       

  If I look at the CPU consumption of the host, I would expect to see 50% usage (500 MHz), and the CPU ready value would be relatively small, if not zero.

  At the same time, each individual VM would show 50% CPU usage (50 MHz).

       

  Now assume that one VM (say VM #1) starts to constantly use/need 1000 MHz. The host CPU would jump to 100%, but what would the individual VMs report? At an instantaneous point, I would expect VM #1 to report 100% CPU and the remaining VMs to report 50%, as the VMs would be scheduled as best they could. The CPU ready value would increase, since there is more work to perform than can be carried out by the host.
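To make the numbers in this scenario concrete, here's how I've been reasoning about it, as a sketch that assumes ideal equal-share scheduling with no reservations, limits, or scheduler overhead:

```python
# Hypothetical numbers from the scenario above; assumes an ideal
# fair-share scheduler with no reservations or limits.
capacity_mhz = 10 * 100      # 10 cores x 100 MHz = 1000 MHz host capacity
demand_mhz = 1000 + 9 * 50   # VM #1 wants 1000 MHz; nine VMs want 50 MHz each

# The nine light VMs are fully satisfied (450 MHz total), so VM #1 can
# only get whatever capacity is left and accrues ready time for the rest.
vm1_alloc_mhz = capacity_mhz - 9 * 50

print(demand_mhz)      # 1450 MHz demanded vs. 1000 MHz available
print(vm1_alloc_mhz)   # 550 MHz actually delivered to VM #1
```

So the host is pinned at 100% while total demand (1450 MHz) exceeds capacity, and the shortfall shows up as ready time on VM #1.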

       

  Now what if VMs #1-9 all use 1000 MHz each? Will the CPU performance chart for VM #10 show that it's using 50% CPU? Will it still get its 50 MHz allotment at a consistent interval?
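One rough way to reason about this case is max-min (water-filling) fair sharing, which I'm assuming approximates what a fair-share CPU scheduler converges to over an interval; this is just a sketch with the made-up numbers above, ignoring shares, reservations, limits, and scheduling granularity:

```python
def max_min_fair(capacity_mhz, demands_mhz):
    """Water-filling allocation: satisfy the smallest demands first,
    splitting the remaining capacity equally among the rest."""
    alloc = [0.0] * len(demands_mhz)
    order = sorted(range(len(demands_mhz)), key=lambda i: demands_mhz[i])
    remaining = capacity_mhz
    for pos, i in enumerate(order):
        share = remaining / (len(order) - pos)  # equal split of what's left
        alloc[i] = min(demands_mhz[i], share)
        remaining -= alloc[i]
    return alloc

# Nine VMs demanding 1000 MHz each plus one well-behaved VM at 50 MHz,
# on a 1000 MHz host:
alloc = max_min_fair(1000, [1000] * 9 + [50])
print(alloc[9])   # the 50 MHz VM still receives its full 50 MHz
print(alloc[0])   # each hungry VM gets about 105.6 MHz (950 / 9)
```

If that model holds, VM #10 would still average its 50 MHz over the interval, but it may have to wait its turn for a core (ready time), so it could feel slower even though its usage chart still reads 50%.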

       

  What I'm really getting at is: what would the CPU usage of a "well-behaving" VM look like when the rest of the VMs are hogging the CPU? Will it still show 50% usage even though it may be running slower (i.e., it may have been allocated only 30 MHz worth of processing in an interval), or will the CPU usage value be adjusted to show the actual performance over that period?