Expert

How many vCPUs to start with

What are people finding is the optimum number of vCPUs to start with on a VM?  Do you usually start with 1 vCPU?  When doing that, do you usually end up having to add one later?  Is it more practical to just start with 2 vCPUs?

9 Replies
Enthusiast

In a test/dev environment:

If there aren't minimum requirements for the software on the VM, I like to start with one. It's best to stay conservative, because taking away is so much harder than giving. Other than that, I try to stick to the minimum requirements of whatever application is going to live on that VM.

Set-Annotation -CustomAttribute "The Impossible" -Value "Done and that makes us mighty"
Expert

I always started with 1, unless proven or needed otherwise...

Follow me @ Cloud-Buddy.com

Blog: www.Cloud-Buddy.com | Follow me @hashmibilal
Contributor

Depends if the application can use 2 or more CPUs.

If what you're going to run on the VM can use 2 or more, give it two if you have the resources :)

Immortal

Take a look at Choosing the Right Virtual Machine Settings, about halfway down, "How Many Virtual Processors?".  Although it's for VMware Fusion, the analogy is a good starting point.

Many who are asked about this comment that the tasks a virtual machine performs often don't need more than one CPU.  That, along with the issues of scheduling multiple CPUs, is why the recommendation is to start with one vCPU unless you have a specific need for more.

Immortal

The more unnecessary resources you hand out, the lower the potential VM density. There is no substitute for testing and monitoring.

-- David -- VMware Communities Moderator
Enthusiast

There's a penalty for over-allocating, so don't do that.  There are also tools - both free and for bucks - that let you "rightsize" your virtual machines later.   I was playing around with VMturbo today (there's a free reporter) and it reported both undersized and oversized guests, for both memory and CPU.

With large environments, it's easy not to rightsize your guests, and it's possible that you could have a guest that will bury a pair of CPUs in an infinite loop and not get spotted. It's also easy to have a guest run out of CPU and have the users be upset while you have excess capacity you haven't allocated.

Which direction would you rather be wrong in?

Expert

Thanks all for the input.  I think it would be better to be wrong in underallocating in this case, as increasing later is easier than decreasing.  However, I did just want to clarify what you were saying about there being a penalty for overallocating.

Definitely there is a penalty to the environment in general when vCPUs on VMs are overallocated.  For example, I start with 32 logical cores on my physical processors, and add my first VM with 2 vCPUs.  Even though it only needs one, I don't see a penalty yet because there are still 30 unused logical cores.  Then I add 2 more VMs with 4 vCPUs each when they only needed 2 vCPUs each.  That's too many vCPUs for the applications inside the OS to use, but not too many vCPUs to be scheduled on the physical cores without contention, so I haven't seen the penalty yet.

Then comes a point in time where I cross the line of having more vCPUs than logical cores.  Now scheduling, ready times, skew, and all of that start to kick in, at least in some measure.  Eventually I have 20 VMs with 50 vCPUs to schedule on my 32 logical cores; a few months later, as I continue to provision, I end up with 128 vCPUs among my VMs to schedule on 32 logical cores.  As my vCPU-to-logical-core ratio grows, the penalty for having to schedule these unneeded vCPUs increases.
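To make the arithmetic in my example concrete, here's a quick sketch (the numbers are just the hypothetical ones from my scenario above):

```python
# Toy calculation of the vCPU:pCPU overcommitment ratio from my example.
# Values above 1.0 mean vCPUs must be time-sliced onto the logical cores.
def overcommit_ratio(total_vcpus, logical_cores):
    """Ratio of allocated vCPUs to physical logical cores on a host."""
    return total_vcpus / logical_cores

print(overcommit_ratio(2, 32))    # first 2-vCPU VM: 0.0625, no contention yet
print(overcommit_ratio(50, 32))   # 20 VMs, 50 vCPUs: ~1.56, contention begins
print(overcommit_ratio(128, 32))  # months later: 4.0, heavy scheduling pressure
```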

But the penalty is relative to the overcommitment of vCPUs in the environment in general.  If I have a fresh ESX(i) server install, I'm not penalized for the first VM having too many vCPUs, because they can all be scheduled.  I see a lot of things indicating that there is a penalty in that scenario.  But if I'm in the rare situation of having a new unprovisioned ESX(i) cluster with 32 logical cores per server, there isn't an IMMEDIATE penalty, although there will be in the future.  Would you agree?

The point I'm trying to clarify is this: even if there are more vCPUs than the OS and applications can use, I still need more vCPUs than logical cores before I see the penalty - which hasn't happened yet on some new servers in our environment.  I have seen statistics and tests showing that the rate of performance increase becomes almost zero at some point when adding more vCPUs than the application can use.   However, I haven't yet seen any statistics or tests showing that just giving a VM too many vCPUs decreases performance when the physical server has more logical cores than the total vCPUs on that server.  Am I off here?   Is there evidence to the contrary?  Thanks

Enthusiast

You are mostly right.

In the case of overcommitted cores, if all your VMs have single vCPUs, you are playing the odds that a core will be available when needed. The more vCPUs a VM requires, the greater the chance it will wait for available physical resources. As well, as the hypervisor commits cores to a multi-vCPU VM, those cores may sit idle until the VM has all the requested cores and processes the instruction.

For an undercommitted host, the VM will be able to schedule all the required cores. There is still a small penalty, however. For every instruction to be processed, the VM requests a number of cores, and the hypervisor has to check whether cores are available. It doesn't take long, but it does take longer the more vCPUs you have.
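The "playing the odds" point can be sketched with a toy probability model (my assumption, not from this thread, and it models strict co-scheduling - ESX actually uses a more forgiving relaxed co-scheduler - but it shows why the odds worsen as vCPU count grows):

```python
# Toy model: if each physical core is independently busy with probability
# p_busy, the chance a VM can be scheduled immediately on k free cores
# shrinks geometrically as k (its vCPU count) grows.
def p_all_free(k, p_busy):
    """Probability that k cores are simultaneously free."""
    return (1 - p_busy) ** k

for k in (1, 2, 4, 8):
    print(k, round(p_all_free(k, 0.5), 4))  # 0.5, 0.25, 0.0625, 0.0039
```

Under this (simplified) model, a 4-vCPU VM on a half-busy host waits far more often than a 1-vCPU VM - which is the intuition behind starting small.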

Enthusiast

I assume Sergeadam is talking about high ready time.

Article:

http://vmtoday.com/2010/08/high-cpu-ready-poor-performance/

It's no good.
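For anyone reading the article: vSphere's performance charts report CPU Ready as a summed value in milliseconds per sampling interval, so to judge it you usually convert to a percentage of the interval. A quick sketch (interval lengths per the vSphere chart defaults - 20 s for realtime, 300 s for daily):

```python
# Convert a vSphere CPU Ready summation value (milliseconds accumulated
# over one sampling interval) into a percentage of that interval.
def ready_percent(ready_ms, interval_seconds=20):
    """Realtime charts sample every 20 s; daily charts every 300 s."""
    return ready_ms / (interval_seconds * 1000) * 100

print(ready_percent(1000))  # 1000 ms ready in a 20 s sample -> 5.0 (%)
```

A common rule of thumb is that sustained ready time above roughly 5% per vCPU is worth investigating - though, as others said, there's no substitute for testing in your own environment.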

Set-Annotation -CustomAttribute "The Impossible" -Value "Done and that makes us mighty"