2 Replies Latest reply on Apr 17, 2019 8:14 AM by JingliangShang

    Pseudo counters 0x10001 and 0x10002 always the same?

    pclouds Lurker

      I have the following program to time an operation (which happens to be rdpmc itself). The program runs on an x86_64 VM that belongs to a resource group of two Linux VMs; the other VM is always hungry for processing power.

       

      When the program runs in the idle Linux VM (unlimited CPU), the two output values are always the same, i.e. elapsed real time and apparent time agree. That is understandable: the two VMs use different CPU cores.

       

      In unlimited mode, the resource group consumes ~5 GHz. Now I limit the group's CPU capacity down to 1 GHz. This time both values increase, but they are still the same. I would expect that when the CPU is shared, the real time counter must advance much faster than the apparent counter, because one VM may sleep while the other one runs, so the two should not tick at the same rate. Am I missing something?

       

      #include <stdio.h>
      #include <stdint.h>
      #include <inttypes.h>

      /* Read a (pseudo) performance counter via RDPMC; c selects the counter. */
      static inline uint64_t counter(uint32_t c)
      {
         uint32_t low, high;
         __asm__ __volatile__("rdpmc" : "=a" (low), "=d" (high) : "c" (c));
         return (uint64_t)high << 32 | (uint64_t)low;
      }
      int main(void)
      {
         int i;
         uint64_t t1, t2, t3, t4;

         while (1)
         {
            t1 = counter(0x10001);   /* real elapsed time, ns */
            t2 = counter(0x10002);   /* apparent elapsed time, ns */
            for (i = 0; i < 1000000; i++)
               t3 = counter(0x10002);
            t4 = counter(0x10001);
            printf("%" PRIu64 " %" PRIu64 "\n", (t4 - t1) / 1000000, (t3 - t2) / 1000000);
         }
         return 0;
      }

        • 1. Re: Pseudo counters 0x10001 and 0x10002 always the same?
          TomaszSzreder Lurker

          Hi pclouds,

           

          I haven’t had a chance to test the code you posted, but I think I know why your experiment failed to demonstrate the timekeeping problem. Putting CPU pressure on a virtual machine should indeed make it fall behind real time (after a while – allow several hours to see the effects), but only provided the guest operating system uses the tick-counting technique for timekeeping. This is true for Windows-family OSes and probably for older Linux distributions as well.

           

          If you refer to the VMware timekeeping guide (http://www.vmware.com/files/pdf/Timekeeping-In-VirtualMachines.pdf) and read the “CPU pressure” section, you’ll see that tickless timekeeping is a good solution in a CPU-hungry environment. Another section, “Timekeeping in Specific Operating Systems”, paragraph “Linux”, describes details of the “Clocksource” enhancement in newer Linux kernels as well as other improvements.

           

          1) I think it is very likely your Linux VMs do not noticeably suffer from the timekeeping problem because they are tickless – you should check their kernel revisions against the information provided in the VMware guide. Can you repeat the test with two Windows machines (say, Windows XP)?

           

          2) In addition, low-level time counters in any given operating system are usually prioritized and may perform fairly consistently even under heavy load. Maybe a 1 GHz processor is just enough for both machines to satisfy their timekeeping needs, and the missing CPU power only affects your computations (rather than basic guest OS functions)?

           

          3) Limiting CPU in a resource pool should already be enough to observe the problem, but you could also put CPU stress on the entire ESXi server and then see how the timekeeping behaves. Hardware timers on the host machine never fall behind real time regardless of CPU pressure (unless they’re faulty). I’m not sure whether resource pool limits also affect the basic communication between host and guests, so that could be worth checking.

           

          4) Another possible reason you haven’t noticed anything is that the effect only becomes visible over a longer period of time – maybe the test should run for 24 hours or so of constant CPU pressure.

          Hope this helps. I can’t recall the details of the tests I conducted more than a year ago, but I believe I managed to reproduce the problem (using Windows XP machines and putting CPU load on the entire ESXi server for 24 hours).

           

          Regards
          Tomasz Szreder
          Software Developer
          Compuware Gdańsk

          • 2. Re: Pseudo counters 0x10001 and 0x10002 always the same?
            JingliangShang Lurker
            VMware Employees

            I happened to see this very old question. Hope this answer might still be useful.

             

            The short answer is NO.

             

            The long answer:

             

            0x10001 returns the real elapsed time in nanoseconds, and 0x10002 returns the apparent elapsed time in nanoseconds.

             

            In fact, the values of the two pseudo PMCs will always differ, and their deltas will differ too, because the values are in nanoseconds.

             

            However, when divided by 1,000,000, the delta values (in milliseconds) will be the same most of the time. That’s because the virtual device keeps time very precisely with respect to the real hardware. Even when the VM is not running on a physical CPU, the virtual device keeps the apparent time accurate for the VM.

             

            Sometimes, especially when the host is very busy, you will see an obvious difference between the real and apparent elapsed time, which is likely to be corrected shortly afterwards by the virtual device, VMware Tools, or the guest OS.

             

            Below are some samples of the two PMCs: the first two samples show approximately the same elapsed time between them; the last two samples show a divergence between real and apparent elapsed time.

             

            timestamp                   0x10001 (real ns)   0x10002 (apparent ns)
            2019-04-09T08:38:02.745Z     99732164188          99720186713
            2019-04-09T08:38:03.761Z    100747718391         100735741702
            2019-04-09T07:39:18.358Z    175354700225         175342722301
            2019-04-09T08:42:17.552Z    293721514241         197848439824

             

            Regards,

            Jing-liang