VMware Cloud Community
vmproteau
Enthusiast

CPU sizing VM/core vCPU/core best practice

I know this has been discussed ad nauseam, but I wanted to post this as I am sizing a new environment.

I have heard best-practice numbers for vSphere with newer processors of 6-10 per core. Unfortunately, I have seen this sometimes referenced as VM/core and other times as vCPU/core.

  1. vCPU/core seems the more useful number. Can anyone verify whether this best-practice figure was meant for vCPUs or VMs?
  2. Would logical processors (i.e. hyperthreading) factor into the calculations, or should they be ignored?
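
For what it's worth, the gap between the two readings is easy to see with a toy calculation; every number below is made up purely for illustration:

```python
# How the two readings of a "6 per core" rule of thumb diverge.
# All numbers here are hypothetical, purely for illustration.
cores = 8             # physical cores in a host
ratio = 6             # the "6 per core" rule of thumb
avg_vcpus_per_vm = 2  # assumed average VM size

vms_if_vm_per_core = cores * ratio                          # 48 VMs, regardless of VM size
vms_if_vcpu_per_core = (cores * ratio) // avg_vcpus_per_vm  # only 24 VMs at 2 vCPUs each

print(vms_if_vm_per_core, vms_if_vcpu_per_core)  # 48 24
```

So at an average of 2 vCPUs per VM, the two interpretations differ by a factor of two, which is why the distinction matters.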
12 Replies
AndreTheGiant
Immortal

1. Recommended practice is 1 vCPU for each VM, except where you need more :)

You can treat the application's CPU requirements (these days usually stated in cores) as the number of vCPUs.

2. Logical processors (that is what they are called under Configuration / Processor) are not usually considered... They can improve scheduling, but they cannot double the number of cores.

The ratio between vCPUs and physical host cores depends on your workload.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
mittim12
Immortal

I have always heard between 6 and 10 as well, but it really depends on your workload. The configuration maximums guide actually tops out at something like 25 per core: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_config_max.pdf

vmproteau
Enthusiast

25...yikes. Yes, I realize it's workload-specific, but that can be a complex calculus. I suppose acceptable CPU latency is ultimately a function of whether end users notice.

:)

I was really just trying to size a new environment with an appropriate amount of memory. No sense loading them with 2TB if CPU contention will prevent you from using it.

So if I use a best-practice number like 6 vCPU/core, I can at least extrapolate an upper memory baseline. Using the average VM sizes in my current environment, 512GB on a 40-core host would be just about right (although I'm not sure I want a host that large).
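
The extrapolation above can be sketched as quick back-of-the-envelope arithmetic. The per-VM averages below are hypothetical stand-ins for whatever your own environment reports, chosen so the result lands near the 512GB figure:

```python
# Back-of-the-envelope memory baseline from a vCPU/core consolidation target.
# The per-VM averages are hypothetical, not from any VMware guidance.
cores = 40             # physical cores in the candidate host
vcpu_per_core = 6      # best-practice consolidation target
avg_vcpus_per_vm = 2   # assumed average vCPUs per VM
avg_mem_gb_per_vm = 4  # assumed average memory per VM, in GB

max_vcpus = cores * vcpu_per_core              # 240 schedulable vCPUs
max_vms = max_vcpus // avg_vcpus_per_vm        # about 120 VMs before contention
mem_baseline_gb = max_vms * avg_mem_gb_per_vm  # about 480 GB

print(max_vcpus, max_vms, mem_baseline_gb)  # 240 120 480
```

Memory far beyond that baseline would likely sit idle behind CPU contention, which is the point about not loading up a host with 2TB.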

mittim12
Immortal

In my experience I have always ended up memory bound.


vmproteau
Enthusiast

Same here. I've never even had to consider CPU calculations, but a ProLiant DL580 G7 can support 2TB of memory. Even with 40 cores, I'm certain we would encounter problematic CPU latency. Thanks for all the comments.

azn2kew
Champion

Every environment is different, and every tested/published best-practice case also differs from your workload. It's great to understand the basic best practices, but when you want to put together an actual design and implementation for a large-scale project, you will need to accurately test and analyze the workload of those applications and tweak it to the point where it's acceptable. Especially if you plan to virtualize SAP, Oracle, SQL, and Exchange servers.

The host you have is a beast, so make sure you take full advantage of it; otherwise use consolidated blade solutions such as Cisco UCS, HP C7000, or IBM HS22 series. I'm not sure how large-scale your virtualization will be, but there are tons of great papers out there to read. Don't put too many eggs in one basket; sometimes it's best to distribute the load evenly across smaller hosts horizontally.

WORKLOAD/STRESS TESTING IS THE KEY TO YOUR SUCCESS. YOU'RE THE ONLY ONE WHO KNOWS WHERE THE SWEET SPOT IS!!!

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!! Regards, Stefan Nguyen VMware vExpert 2009 iGeek Systems Inc. VMware vExpert, VCP 3 & 4, VSP, VTSP, CCA, CCEA, CCNA, MCSA, EMCSE, EMCISA
vmproteau
Enthusiast

I agree and prefer to grow out rather than up. We're primarily an HP shop so, ultimately, we'll likely land on DL380 G7, dual hex-core, with 160GB-192GB memory. I'm forced to offer a monster-box alternative for comparison. We looked at HP c7000 and FlexFabric as well but can't really make the numbers work for our environment. We also want to wait for converged infrastructure to mature a bit.

I also agree that there is no substitute for proper baselining, testing, and analysis, but a lot of those tools are more suited to right-sizing individual VMs than to determining the proper CPU/memory ratio per host/cluster. Regardless, the deadlines we're under won't allow much discovery. I will know very little about the applications, let alone their workloads, until just before we import them. So, good or bad, I'll have little choice but to rely a bit more on best practices and case-study numbers.

FredPeterson
Expert

Server 2008 seems to very much prefer 2 vCPUs compared to 2003, so that should be considered.

Plus, the whole "1 vCPU" thing is really old hat in my opinion, dating back to before the ESX scheduler matured significantly.

There's a reason why VMmark etc. don't even have 1-vCPU VMs in the tiles.

AureusStone
Expert

Can you expand on Server 2008 preferring 2 vCPUs? I have a bunch of 2008 VMs running with 1 vCPU, no problems. A lot of them barely use any resources.

Even with the improved scheduler, I can't see any justification for adding resources that aren't required.

bulletprooffool
Champion

I fully agree here - if you have the capacity in your DC, I definitely prefer to go with more ESX hosts.

That way, the impact of losing a host is smaller and my ability to handle failover is better.

I appreciate that this may be slightly less cost-effective once I include the cost of networking / power / air conditioning / licensing etc., but if you have enough confidence in VMware, you can design your environment so you run your hosts at higher utilization and use technologies like DPM to bring 'reserve' boxes up in the event of issues.

Generally, the way I see it... the more baskets for my eggs... the better.

One day I will virtualise myself . . .
FredPeterson
Expert

AureusStone wrote:

Can you expand on Server 2008 preferring 2 vCPUs? I have a bunch of 2008 VMs running with 1 vCPU, no problems. A lot of them barely use any resources.

Even with the improved scheduler, I can't see any justification for adding resources that aren't required.

The server simply works faster when the additional CPU is there, and I don't just mean when it's working hard, either.

Server 2003 is snappy regardless of how many CPUs you have (given no other constraints in the environment, of course). My experience has shown 2008 responds, in general, much better with 2.

Guess it's just my experience.

vmproteau
Enthusiast

AureusStone wrote:

Can you expand on Server 2008 preferring 2 vCPUs? I have a bunch of 2008 VMs running with 1 vCPU, no problems. A lot of them barely use any resources.

Even with the improved scheduler, I can't see any justification for adding resources that aren't required.

It all depends on your needs. If 1 vCPU is working fine and there is little benefit to some increased performance, it is better to size small. As a best practice, start with 1 vCPU. After monitoring, if performance is a concern or you see that 1 vCPU isn't sufficient, you can always add more later. Most of our Windows 2008 servers are 1 vCPU.
