5 Replies Latest reply on Nov 29, 2006 11:19 AM by dpomeroy

    Single vs Multiple Virtual Processors

    blarew Novice

      Hi All.  I'm interested to know the benefit of running a single vs. multiple virtual processors.  I may misunderstand, but let's say I have a 4-processor server.  If I have a 1 virtual processor VM, doesn't ESX spread its computing tasks among the 4 physical processors under the hood?  And if I have a 2 virtual processor VM, doesn't that VM have to wait for 2 physical processors to be available before it can perform computing tasks?  I'm wondering if it's better to move multiple-virtual-processor VMs back to a single processor so that load is spread more evenly over my processors, with better response times.  Maybe you could tell me if I'm looking at this wrong, or what the benefit of multiple virtual processors is?  Thanks.

        • 1. Re: Single vs Multiple Virtual Processors
          mreferre Virtuoso

          Well, basically you are "looking at this wrong".

           

          1 vCPU means you can use 1 physical core.

          2 vCPU means you can use 2 physical cores.

          Etc etc etc

           

          ESX DOES NOT spread a 1 vCPU VM evenly across all the physical processors. The single vCPU may run on different physical processors over time, but it always runs on exactly 1 pCPU at any point in time.

           

          Those are the basics. There are, however, potential issues in running 2 or 4 vCPU configurations under some circumstances... but I am not getting into that here, because there are some 1500/2000 posts that discuss it.
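
          To make the basics concrete, here is a toy simulation (my own sketch in Python, NOT VMware's actual scheduler): a 1 vCPU VM can run whenever any single pCPU is free, while a 2 vCPU VM has to find 2 free pCPUs at the same instant, so it spends more time waiting ("ready time") when the host is busy.

          import random

          PCPUS = 4  # physical cores on the host

          def simulate(vms, ticks=10000):
              """vms: dict mapping VM name -> vCPU count. Greedy per-tick placement."""
              ready = {name: 0 for name in vms}
              for _ in range(ticks):
                  free = PCPUS
                  order = list(vms)
                  random.shuffle(order)          # so no VM is permanently favored
                  for name in order:
                      if vms[name] <= free:      # ALL of its vCPUs fit at once
                          free -= vms[name]
                      else:                      # wanted to run, had to wait
                          ready[name] += 1
              for name, waited in ready.items():
                  print(f"{name} ({vms[name]} vCPU): ready {100 * waited / ticks:.0f}% of the time")

          # 4 pCPUs, three 1-vCPU VMs plus two 2-vCPU VMs, all CPU-bound:
          simulate({"uni1": 1, "uni2": 1, "uni3": 1, "smp1": 2, "smp2": 2})

          Run it and the 2 vCPU VMs consistently show higher ready percentages than the 1 vCPU VMs, purely because they need two slots at once.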

           

          I suggest you have a look at the most recent posts on the topic.

           

          Massimo.

          • 2. Re: Single vs Multiple Virtual Processors
            dpomeroy Virtuoso

            Massimo is correct. To expand a little: the ESX scheduler runs virtual CPUs on physical CPUs (or cores on a multicore system, or "logical" CPUs on a hyperthreaded system).

             

            At any given time only one virtual CPU is running on one physical CPU. So on a server with 4 physical CPUs, only 4 virtual CPUs can be running at any given time. If you have a VM with 2 vCPUs, every time it runs it needs to be on 2 physical CPUs. So, yes, a 2 vCPU VM needs 2 physical CPUs to run on. And no, ESX doesn't spread a VM's 1 vCPU over 4 physical CPUs. The scheduler will move which physical CPU the VM is running on (unless you tie it to a physical CPU via affinity rules), so if you check at time A it could be running on CPU1, and if you checked later it could be running on CPU2.
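
            To picture that last point, here's a trivial sketch (mine, in Python, not ESX internals): five runnable 1 vCPU VMs on a 4 pCPU host. At any instant VM "A" is either waiting or on exactly one pCPU, and which pCPU that is changes from one scheduling interval to the next.

            import random

            PCPUS = 4
            VMS = ["A", "B", "C", "D", "E"]  # five runnable 1-vCPU VMs, one more than pCPUs

            for tick in range(6):
                scheduled = random.sample(VMS, PCPUS)  # 4 of the 5 get a pCPU this interval
                if "A" in scheduled:
                    print(f"tick {tick}: A runs on pCPU {scheduled.index('A')}")
                else:
                    print(f"tick {tick}: A accumulates ready time (no pCPU for it)")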

             

            The benefit of multiple virtual CPUs is basically the same in the virtual world as in the physical one: it allows you to run more CPU-intensive workloads that can take advantage of SMP.

             

            The rule of thumb most of us recommend is to always start with 1 vCPU and then add the second one later if it is in fact needed. It is much easier to go up from one than to try to go back down (among other things, Windows guests typically need a HAL/kernel change when the processor count changes, and going back down is not always clean).

            • 3. Re: Single vs Multiple Virtual Processors
              simon.l Expert

              Our experience with dual-processor VMs on four-way servers has been interesting.  Our four-way proof-of-concept ESX server started with 10 single-processor VMs.  We added a single SMP (2 vCPU) VM and it ran without issue, no "ready times".  Upon adding the 2nd and 3rd SMP VMs, the ready times went through the roof, reaching 20 to 30%.  We only reduced this by turning off 5 of the single-processor VMs.

               

              Kind regards

               

              Si

              • 4. Re: Single vs Multiple Virtual Processors
                Pisapatis Enthusiast

                If 1 vCPU maps to 1 physical CPU/core, what happens when we run 12 VMs on a 4-socket (dual-core) ESX configuration? Is there a guaranteed percentage of CPU/core allocation for each VM? Or are they queued to the hypervisor one at a time?

                • 5. Re: Single vs Multiple Virtual Processors
                  dpomeroy Virtuoso

                  That is where the scheduler comes in. A vCPU doesn't run 100% of the time; the scheduler decides which vCPUs run on which pCPUs and for how long. It has to take into consideration the resource settings we control, such as min/max, reservations, shares, etc., when doing this, of course.
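
                  As a rough worked example (a deliberately simplified model, not ESX's real accounting), here is the proportional-share arithmetic for that 12 VMs on 8 cores case: when everything is CPU-bound and no reservations are set, capacity is split in proportion to shares, so nothing is queued "one at a time"; every VM just gets a slice.

                  PCPUS = 8  # 4 sockets x 2 cores

                  def entitlements(shares, vcpus=1):
                      """Split pCPU capacity in proportion to shares (all VMs CPU-bound).
                      Capped at each VM's vCPU count -- a 1-vCPU VM can never use more
                      than 1 core. (Real schedulers redistribute the excess; omitted here.)"""
                      total = sum(shares.values())
                      return {vm: min(vcpus, PCPUS * s / total) for vm, s in shares.items()}

                  # 12 single-vCPU VMs with equal shares: each is entitled to 8/12 of a core.
                  equal = entitlements({f"vm{i}": 1000 for i in range(1, 13)})
                  print(f"equal shares: ~{equal['vm1']:.2f} of a core each")   # ~0.67

                  # Double vm1's shares and its slice grows at the others' expense:
                  skewed = {f"vm{i}": 1000 for i in range(1, 13)}
                  skewed["vm1"] = 2000
                  print(f"vm1 with 2x shares: ~{entitlements(skewed)['vm1']:.2f} of a core")  # ~1.00 (capped)

                  A reservation (the "min") is what turns part of that slice into a guaranteed percentage.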

                   

                  VMware just released a white paper on Ready Time that has some good information about how the scheduler works. You can check it out here: Ready Time Observations (http://www.vmware.com/vmtn/resources/641)