I'd only increase the number of vCPUs if the application inside the guest really takes advantage of SMP (MS SQL Server is an example of that).
And as long as there isn't a performance problem with 1 vCPU, I don't see a reason to increase the number of vCPUs, regardless of whether the application takes advantage of SMP.
oreeh, I think my setup can take advantage of multiple CPUs. If not by virtue of running SMP-capable applications, then at least on the basis of running several server processes at the same time.
I guess my question boils down to this: does increasing the number of vCPUs exact a performance penalty on the VM? If so, any idea of the percentage? Is the benefit of having 2 vCPUs outweighed by the additional overhead the VM layer has to deal with (managing 2 CPUs)? I won't run multiple vCPUs if the host machine is only dual core, but a quad core makes plenty of extra resources available. Any thoughts on this "theory"?
This really depends on the guest and its utilization and I don't think that there's "one answer to all".
There definitely is a performance impact from the scheduler.
And I'm pretty sure that co-scheduling isn't available in Workstation which makes things even worse.
On the other hand: if the performance of the guest is bad and the guest's CPU utilization is maxed out pretty often, then there is a good chance that vSMP will improve things.
This however also depends on the other VMs running on the host!
Multiple VMs competing for 4 cores can be a problem, especially when using vSMP VMs.
With the hosted products (Workstation, Server, Fusion), to be safe I'd count on only three cores being available in a quad-core system, since the host OS (and the VMware processes) compete for the CPU too.
If you try it, make sure to update the HAL in the guest (switch a Windows guest from the uniprocessor to the multiprocessor HAL), otherwise your results will be bad.
Any percentage would only be a guess ... if you want numbers, I'd suggest we move this post to the performance forum.
When I posted my previous response, I hadn't seen your second post containing links to some very useful documents. (Thanks!)
One of those documents linked to an Oracle paper: http://www.vmware.com/pdf/Oracle_Scaling_in_ESX_Server.pdf.
This paper demonstrates that adding VMs actually increases the overall throughput of a datacentre. One question in this regard: the paper demonstrates the benefits of adding additional VMs on the same host up to the point where the host CPU is 100% utilized. I was wondering if adding an additional vCPU would have the same or a similar effect as adding an additional VM?
I guess we are both posting responses at the same time.
Your statement ("And I'm pretty sure that co-scheduling isn't available in Workstation which makes things even worse.") got me thinking. Is it available in the "server"-level products from VMware? I can change my host OS to Windows 2003 in no time (currently it is XP). I did a quick lookup of products from VMware, and there are a few server-level products that sound promising. One is VMware Server, which runs atop a server OS (like Windows 2003), and there is ESXi, which (I believe) runs using its own built-in OS. Would one of these be a good choice? As another wild thought: I am thinking of installing ESXi on my machine and then running XP, 2003, etc. in their own VMs on top of ESXi. Any thoughts on this idea?
Be aware that this paper is based on ESX not Workstation (although the effect with Workstation is similar).
Adding more vCPUs to a VM will max out the host's CPUs, but that is different from adding another single-vCPU VM.
I'm still assuming co-scheduling isn't available in Workstation, for the following reason:
When a vSMP VM runs, it can only be scheduled if the appropriate number of physical cores is available at the same time.
When scheduling a single-vCPU VM, only one core needs to be available.
Assuming four cores, and four VMs:
with only UP VMs, three (sometimes four) VMs can be scheduled at any time
with one vSMP and three UP VMs, either three UP VMs can be scheduled at any time, or one UP and one vSMP VM can be scheduled
This clearly decreases overall performance but might increase the performance of the vSMP VM.
And if we assume six or eight VMs, it gets even worse.
You probably won't notice a performance decrease in the UP VMs (unless, of course, they are heavily utilized), but you will notice a performance decrease in the vSMP VM.
If however you are only running one VM it won't matter much.
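The scheduling constraint above can be sketched with a toy model (a hypothetical illustration, not VMware's actual scheduler): assume strict co-scheduling, i.e. a vSMP VM only runs when cores for all of its vCPUs are free at once, and assume three of the four cores are effectively available to VMs, with one left for the host as suggested earlier.

```python
# Toy model of strict co-scheduling. A VM is runnable in a time slice
# only if enough free physical cores remain for ALL of its vCPUs.
# This greedy picker is purely illustrative; real schedulers also
# weigh fairness, priority, and which VM has been waiting longest.
def schedulable(vms, cores):
    """vms: list of (name, vcpu_count) tuples, in scheduling order.
    cores: physical cores available to VMs.
    Returns the names of VMs that can run together in one slice."""
    running, free = [], cores
    for name, vcpus in vms:
        if vcpus <= free:
            running.append(name)
            free -= vcpus
    return running

# One 2-vCPU (vSMP) VM plus three 1-vCPU (UP) VMs on 3 usable cores:
# the vSMP VM takes two cores, so only one UP VM fits beside it.
print(schedulable([("vsmp", 2), ("up1", 1), ("up2", 1), ("up3", 1)], 3))

# With only UP VMs, three of them run concurrently on the same 3 cores.
print(schedulable([("up1", 1), ("up2", 1), ("up3", 1)], 3))
```

This matches the argument above: the mixed case schedules fewer VMs per slice (one vSMP plus one UP, versus three UP), so overall throughput drops even though the vSMP VM itself may run faster.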
Co-scheduling is available in ESX 3.x and ESXi.
I don't think we'll ever see it in another VMware product (Server, Workstation, ...), regardless of the host OS, as (from my understanding) it requires some kernel support.
Depending on your needs and your environment ESXi could be worth a try.
Thanks for all the help.
I did quite a bit more extensive reading over the past couple of hours and am now of the opinion that a UP (single-vCPU) VM is generally better than a vSMP VM. However, there are cases where vSMP would help, and I believe my setup falls into that category.
I am going to take advantage of this long weekend and try building an ESXi based work machine. Will report back with results/thoughts.