VMware Cloud Community
ashleymilne
Enthusiast

CPUs, cores and overprovisioning

I know this question has been asked before in one form or another; however, I am looking for answers from those who administer and use servers running ESXi, not a pointer to a best-practices document.

If I have a server with a single CPU that has 8 cores, best practice says I can have eight virtual machines configured with a single core each, four using two cores each, or some variant that uses all eight cores but no more.

What happens if I have four virtual machines configured with two cores each and I add another virtual machine configured with two more cores? I know it's next to impossible to say exactly how this will affect performance, as different apps use CPU differently and it depends on the workload of the server, the number of users the VMs are serving, etc.

In my scenario I have a single server with an 8-core CPU running VMs for Exchange 2010, a terminal server (2008 R2) and an application server (2008 R2), all configured with 2 cores, plus two VMs that are domain controllers, each configured with a single core. This is all serving a company of 50 or so people.

I am going to need to add two more VMs in the near future, one running Microsoft SQL and the other another application server; I suspect each of these will require two cores. I do have a second server with a single quad-core processor to which I could move two of the VMs using two cores each, which I am likely going to do.

I just wanted to know what others are doing out there with their VMs and cores, how they are managing it, and what performance degradation, if any, they are seeing in real-world use.

3 Replies
ClintColding
Enthusiast

For CPUs it isn't a 1:1 ratio of physical to virtual. I wouldn't hesitate to run 16 or even 32 single-core VMs on an 8-core physical machine. The performance will depend on the type of application workloads you have. CPU ready is a reliable metric for determining when, or if, you have truly overprovisioned your pCPUs. Check out this page for more info on esxtop metrics: http://www.yellow-bricks.com/esxtop/
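If you'd rather pull this from vCenter than watch esxtop live, the same metric shows up as the cpu.ready.summation counter, which is reported in milliseconds per sample interval. A minimal sketch (plain Python, assuming the default 20-second real-time interval and a made-up example value, not data from any live host) of how to turn that into the CPU ready percentage people normally quote:

```python
# Convert a cpu.ready.summation value (milliseconds of ready time accumulated
# in one sample interval) into a CPU ready percentage.
# Assumes the default 20-second real-time sample interval; the example value
# below is illustrative, not a measurement.

SAMPLE_INTERVAL_MS = 20 * 1000  # 20-second real-time interval in milliseconds

def cpu_ready_percent(ready_summation_ms: float) -> float:
    """Percentage of the sample interval the VM spent waiting for a pCPU."""
    return (ready_summation_ms / SAMPLE_INTERVAL_MS) * 100

# Example: a VM that accumulated 1,600 ms of ready time in one interval
print(f"{cpu_ready_percent(1600):.1f}%")  # -> 8.0%
```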

0 Kudos
jrmunday
Commander

I'm afraid the answer here is ... "It depends". You need to understand the CPU workload (including any trends, for example month end processing) to determine the actual requirements.

The CPU scheduler is very efficient at managing guest workloads, but you need to look at the performance metrics to help you determine whether you need more capacity or not. I would review each guest VM and ensure that none are unnecessarily overprovisioned (i.e. only need one vCPU but have two assigned), and then look at %RDY (ready time) to get an idea of any latency introduced by overprovisioned CPU. Don't forget that we expect to see some ready time in a virtualised environment, and user experience should be included in your analysis.
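One wrinkle when reading %RDY: at the VM group level esxtop sums it across all of the VM's vCPUs, so a 2-vCPU VM showing 10% is really around 5% per vCPU. A small sketch (plain Python, using a hypothetical 5%-per-vCPU rule of thumb rather than any official threshold, with invented readings loosely named after the VMs in the question) of how you might normalise and flag it:

```python
# Normalise a VM-level %RDY figure (summed across vCPUs by esxtop)
# to a per-vCPU value and flag VMs worth a closer look.
# The 5% per-vCPU threshold is a common rule of thumb, not a hard limit.

PER_VCPU_THRESHOLD = 5.0

def ready_per_vcpu(vm_ready_percent: float, vcpu_count: int) -> float:
    return vm_ready_percent / vcpu_count

# Hypothetical readings: (VM-level %RDY, vCPU count)
vms = {"EXCH01": (12.0, 2), "TS01": (4.0, 2), "DC01": (1.0, 1)}

for name, (rdy, vcpus) in vms.items():
    per_vcpu = ready_per_vcpu(rdy, vcpus)
    status = "investigate" if per_vcpu > PER_VCPU_THRESHOLD else "ok"
    print(f"{name}: {per_vcpu:.1f}% ready per vCPU ({status})")
```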

Hope this helps!

Cheers,

Jon

vExpert 2014 - 2022 | VCP6-DCV | http://www.jonmunday.net | @JonMunday77
Alistar
Expert

ESXi's resource scheduler is pretty smart about spreading the CPU workload. Think of these resources not as individual CPU cores, but rather as their total combined frequency - this is the pool the ESXi host has available for all your currently running VMs. The hypervisor gives each VM just as much frequency as it actually needs. This is also what you see on the Summary page of your ESXi host - the total frequency your CPU provides.
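To make the "pool of frequency" idea concrete, here's a small sketch (plain Python, with an assumed 2.4 GHz clock speed and an invented demand figure, since the original post doesn't give either) of where that Summary-page number comes from and how much headroom a given aggregate demand would leave:

```python
# Treat the host's CPU resource as one pool of MHz, as the Summary page does.
# The per-core clock speed and the demand figure are assumptions for
# illustration; substitute your own host's actual values.

cores = 8
core_mhz = 2400                 # assumed per-core clock speed
pool_mhz = cores * core_mhz     # what the Summary page shows as total capacity

vm_demand_mhz = 6500            # hypothetical combined demand of all running VMs

print(f"Pool: {pool_mhz} MHz")
print(f"Demand: {vm_demand_mhz} MHz ({vm_demand_mhz / pool_mhz:.0%} of the pool)")
```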

The multi-core count comes into play when a VM requests it. VMs using more than 1 vCPU must be scheduled onto the same number of physical cores for that operation. So for a machine with 4 vCPUs, 4 physical cores must be free at the same point in time to process the instructions. Also, VMs' vCPUs migrate all over the physical CPU cores in real time depending on contention. The literature suggests that, depending on the workload, you can serve give or take 3 vCPUs per physical core.
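Applying that rough 3-vCPUs-per-core figure to the scenario in the question (8 physical cores, the five existing VMs plus the two planned ones) is simple arithmetic; here's a sketch that treats the 3:1 figure as a guideline rather than a hard limit:

```python
# Compare the total vCPU count against a rough vCPU:pCPU guideline.
# VM sizes are taken from the original post; the 3:1 ratio is the
# rule of thumb mentioned above, not a hard limit.

physical_cores = 8
guideline_ratio = 3.0

vcpus = {
    "Exchange 2010": 2, "Terminal Server": 2, "App server": 2,
    "DC1": 1, "DC2": 1,
    "SQL (planned)": 2, "App server 2 (planned)": 2,
}

total_vcpus = sum(vcpus.values())
ratio = total_vcpus / physical_cores
verdict = "within" if ratio <= guideline_ratio else "above"
print(f"{total_vcpus} vCPUs on {physical_cores} cores = {ratio:.1f}:1 "
      f"({verdict} the ~3:1 guideline)")
```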

Of course it all depends on the workload and CPU peak times. Observe long-term CPU usage of your ESXi host (especially the Ready value) once you build the new VMs - you can always migrate them off if they cause problems.

The bottom line: if RAM is not a constraint and you are not facing any CPU peak usage yet, then go ahead and build the servers on your already running host 🙂

Stop by my blog if you'd like 🙂 I dabble in vSphere troubleshooting, PowerCLI scripting and NetApp storage - and I share my journeys at http://vmxp.wordpress.com/
0 Kudos