As far as I know, each virtual CPU maps to a physical one. My experience backs up what you've read:
... as it can cause slowdowns because the VM must wait for 2 CPUs to be free.
I have a single-CPU, dual-core server running a virtualized Oracle 11g cluster. When I give each VM two virtual CPUs (to make better use of all the cores), performance is much worse than giving each VM a single vCPU. I never understood why it worked out that way, but it looks like your point hits the spot.
Thread moved to Server 2 beta:Performance and Scalability forum.
With many new servers now coming out with multiple cores, we are winding up with machines that have upwards of 8 cores each. I've read that one should not set up a VM with multiple CPUs, as it can cause slowdowns because the VM must wait for 2 CPUs to be free. If one were to configure a VM with a single CPU, does that use only one physical CPU? Or is there some sort of multithreading that allows the single virtual CPU to take advantage of the multiple cores in the system? Is there anything else to watch out for when setting up VMware Server on a system that has many cores?
A single-vCPU virtual machine will only use one core. An 8-core machine could be running 7 such VMs simultaneously with no problem, leaving the 8th core free for the host's own processes, for example the scheduler.
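For reference, the vCPU count is a per-VM setting in the VM's .vmx configuration file. This is an illustrative excerpt, not a complete config:

```ini
; Excerpt from a VM's .vmx file (illustrative).
; numvcpus controls how many virtual CPUs the guest sees; leaving it
; at 1 (or omitting it) gives the single-vCPU layout described above.
numvcpus = "1"
```

Changing this value takes effect the next time the VM is powered on.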
That last bit is the key consideration when deciding how many processors to assign to a VM: the host's scheduler is what decides who runs what, and when.
Let's say you have a dual-core machine running one VM. If you create that VM with one vCPU, the host only has to pull one running process off one core to run the VM. If you create it with 2 vCPUs, the host has to pull the processes off both cores, because both cores must be made available to the VM at the same time. That means the host has to deschedule the entire VM just to do anything itself, and this is where the performance problem occurs. If your host were a quad-core machine, however, a 2-vCPU VM would not cause the same problem.
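The constraint above can be sketched as a toy rule: a VM is dispatchable only when all of its vCPUs can land on free physical cores at the same moment. This is an illustration of the idea, not VMware's actual scheduler logic:

```python
# Toy illustration of the co-scheduling constraint described above:
# a VM can only be dispatched when ALL of its vCPUs can be placed on
# free physical cores simultaneously. (Hypothetical helper, not a
# real VMware API.)

def can_dispatch(free_cores: int, vcpus: int) -> bool:
    """A VM runs only if enough cores are simultaneously free."""
    return free_cores >= vcpus

HOST_CORES = 2

# The host needs one core for its own work, leaving one free.
free = HOST_CORES - 1

print(can_dispatch(free, 1))  # True  - a 1-vCPU VM fits alongside the host
print(can_dispatch(free, 2))  # False - a 2-vCPU VM forces the host off a core
```

On a quad-core host with one core busy, `can_dispatch(3, 2)` is true, which is why the 2-vCPU case stops hurting there.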
On your 8-core machine, you could simultaneously run four 2-vCPU VMs. One of them will get scheduled off whenever the host needs to do something, but that is going to happen eventually anyway. Even so, that still may not be optimal, because each running VM ties up 2 cores.
In general, always create VMs so that the number of vCPUs is less than the number of real CPUs/cores. So on a dual-core host, don't create a 2-vCPU VM. The more real CPUs/cores you have, the safer it is for everyone's performance to create 2-vCPU VMs, though you may still find that single-vCPU VMs are the optimal case. The basic rule could be stated as: always create single-vCPU VMs unless you have a specific reason not to.
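The rule of thumb above could be expressed as a small hypothetical helper (the function name and the choice of 2 vCPUs as the SMP case are my own, for illustration):

```python
# Hypothetical helper expressing the rule of thumb above: default to
# 1 vCPU, and allow 2 only when the host has more cores than that,
# so cores remain free for the host and other VMs.

def suggested_vcpus(host_cores: int, wants_smp: bool = False) -> int:
    """Return 1 unless SMP is requested AND the host has spare cores."""
    if wants_smp and host_cores > 2:
        return 2  # vCPU count stays below the physical core count
    return 1

print(suggested_vcpus(2, wants_smp=True))  # 1 - dual core: never 2 vCPUs
print(suggested_vcpus(4, wants_smp=True))  # 2 - quad core can co-schedule safely
```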
Now, we all know that hard-and-fast rules are rarely always right. If a VM is doing something that genuinely benefits from 2 processors AND has low I/O needs, you might see a performance increase from a 2-vCPU VM even on a dual-core machine, but those cases are outside the norm.
First, Server 2.0 has significant improvements in 2-VCPU support (enough for us to say "full support" instead of "experimental support"), so it is worth re-running your experiments on the new version.
SMB's advice is pretty much what I'd say: don't give a VM more VCPUs than your host has physical processors. UP (uniprocessor) VMs will degrade gracefully when the host is overcommitted; SMP VMs will degrade more rapidly. To avoid that degradation in SMP VMs, use ESX.
An SMP VM's efficiency is workload-dependent, but it works best when both VCPU threads are running simultaneously; otherwise, we burn a lot of CPU time trying to rendezvous between the threads. A loosely coupled workload (each guest VCPU does independent things, e.g. a kernel compile) tends to scale just fine; a tightly coupled workload (both guest VCPUs share locks, e.g. a high-performance database) is extremely sensitive to this co-scheduling requirement. In practice, you just have to try the workload under load to find out.
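To make the two workload shapes concrete, here is a sketch using Python threads as stand-ins for guest VCPU threads (an illustrative assumption; a real guest workload is not structured this way):

```python
# Toy contrast of the two workload shapes described above. Python threads
# stand in for guest VCPU threads - illustrative only.
import threading

N = 100_000
counter = 0
lock = threading.Lock()

def loosely_coupled(results, i):
    # Independent work, no shared state: each "vCPU" makes progress
    # even if its sibling thread happens to be descheduled.
    results[i] = sum(range(N))

def tightly_coupled():
    # Every iteration takes a shared lock: if the sibling is descheduled
    # while holding it, this thread stalls - the co-scheduling hazard.
    global counter
    for _ in range(N):
        with lock:
            counter += 1

results = [0, 0]
loose = [threading.Thread(target=loosely_coupled, args=(results, i)) for i in range(2)]
tight = [threading.Thread(target=tightly_coupled) for _ in range(2)]
for t in loose + tight:
    t.start()
for t in loose + tight:
    t.join()

print(counter)  # 200000 - correct, but only thanks to constant lock traffic
```

The tightly coupled pair spends its time synchronizing on the lock, which is exactly the pattern that suffers when its two threads cannot be co-scheduled.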
Wow, that's some great information, SMB, and it makes a lot of sense. It sounds like with these new multi-core systems coming out, making a dual-vCPU VM isn't such a bad thing anymore.
Mainly I'm looking at this because we have some of these big servers, and I'm considering moving Oracle into a VM. I wouldn't want to hamstring Oracle with one CPU, since the whole point of all those CPUs is to get more performance out of Oracle. It sounds like I'd still be better off running Oracle on real hardware for now, so it can use all of the CPUs. On our backup machine, though, where I can afford to strip Oracle down to 2 CPUs, I'll be able to virtualize the whole thing.
Have you considered using Oracle's own virtualisation product (based on Xen)? They will support the database on their own product. Unfortunately, they've removed the prebuilt RAC VMs they used to provide for VMware.