One of my peers suggested that we reconsider the number of CPUs for each VM in our environment.
The majority of the VMs have 2. Some have 1 and some have 4.
He had heard it was a VMware best practice to have 1.
Is there something to this? Or is the answer that it depends on the workload of a given VM?
His concern is that we are over-allocating CPU resources that the VMs don't need.
I'd appreciate any input, comments, etc. on this topic.
As a best practice, use as few as possible, but as many as needed. Unless you are sure you need more than 1 vCPU (load requirements, vendor recommendations, ...), start with a single vCPU and increase the vCPU count as required.
This is an excerpt from http://www.vmware.com/pdf/vi_performance_tuning.pdf. It refers to VI3, but it is still valid.
VMware recommends the following practices and configurations for optimal CPU performance:
When configuring virtual machines, remember that ESX Server itself has some overhead. Allow for the CPU overhead required by virtualization, and take care not to excessively overcommit processor resources (in terms of CPU utilization and the total number of VCPUs).
Use as few virtual CPUs (VCPUs) as possible. For example, do not use Virtual SMP if your application is single-threaded and does not benefit from the additional VCPUs.
Having virtual machines configured with virtual CPUs that are not used still imposes resource requirements on the ESX Server. In some guest operating systems, the unused virtual CPU still consumes timer interrupts and executes the idle loop of the guest operating system, which translates to real CPU consumption from the point of view of the ESX Server. See “Related Publications” on page 22, KB articles 1077 and 1730.
In ESX we try to co-schedule the multiple VCPUs of an SMP virtual machine. That is, we try to run them together in parallel as much as possible. Having unused VCPUs imposes scheduling constraints on the VCPU that is actually being used and can degrade its performance.
Consider this extreme example, which isn't far off what I have seen in some implementations.
I have a host with one quad-core CPU. I have four VMs. Each VM has four vCPUs.
When a single VM wants to do something, it needs to lock all four cores and be scheduled a timeslice during which no other VM has any processing capacity.
After changing each VM to one vCPU, each VM has a 100% time share on one core. This can actually result in significant performance improvements.
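To put rough numbers on that example, here's a back-of-the-envelope sketch. It assumes the strict co-scheduling described above, where all of a VM's vCPUs must run simultaneously (newer ESX versions use relaxed co-scheduling, so real results will be less extreme); the figures are illustrative only:

```python
# Thought experiment from the example above: one host with 4 physical
# cores and 4 VMs. Assumes strict co-scheduling (all of a VM's vCPUs
# must be placed on cores at the same time).

physical_cores = 4
num_vms = 4

def core_share_per_vm(vcpus_per_vm: int) -> float:
    """Fraction of a physical core each VM effectively gets."""
    # How many VMs can run on the host at the same time.
    concurrent_vms = max(1, physical_cores // vcpus_per_vm)
    # The VMs take turns in groups of `concurrent_vms`.
    return concurrent_vms / num_vms

print(core_share_per_vm(4))  # 0.25 -> each 4-vCPU VM waits 75% of the time
print(core_share_per_vm(1))  # 1.0  -> each 1-vCPU VM owns a full core
```

Same host, same VMs: dropping from four vCPUs to one takes each VM from a quarter of the host's time to an uncontested core.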
It's rare that every machine actually uses several cores. By all means, isolate the machines that do and give them what they need.
As mentioned by some of these other smart individuals, the ratio of virtual CPUs (vCPUs) to physical CPUs (pCPUs) can also be a source of performance degradation, particularly when the CPU scheduler on the host is unable to always give VMs CPU time whenever they ask for it. As a general rule of thumb, although not a perfect science, a 3:1 ratio (vCPU:pCPU) is typically safe. The way to know for sure whether you're encountering performance degradation because of your vCPU:pCPU ratio is to look at the CPU Ready metric for some of the VMs on a given host (or for a VM that you suspect has a performance issue).
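One gotcha when reading that metric: the vSphere performance charts report CPU Ready as a summation in milliseconds over the sample interval, so it's usually converted to a percentage before judging it. A minimal sketch of the conversion (the 20-second interval is what real-time charts use; treating roughly 5% per vCPU as a warning sign is a common rule of thumb, not an official limit):

```python
def cpu_ready_percent(ready_ms: float, interval_s: float = 20.0) -> float:
    """Convert a vSphere CPU Ready summation value (ms) to a percentage.

    The performance charts report CPU Ready as total milliseconds of
    ready time accumulated over the chart's sample interval; real-time
    charts sample every 20 seconds.
    """
    return ready_ms / (interval_s * 1000.0) * 100.0

# 400 ms of ready time in a 20 s real-time sample:
print(cpu_ready_percent(400))   # 2.0 -> comfortably low
# 2000 ms in the same interval:
print(cpu_ready_percent(2000))  # 10.0 -> worth investigating
```

If the converted value stays high across several VMs on the same host, that points at host-level CPU contention rather than a single misbehaving guest.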