I know this has been discussed ad nauseam, but I wanted to post this as I am sizing a new environment.
I have heard best-practice numbers for vSphere on newer processors of 6-10 per core. Unfortunately, I have seen this referenced sometimes as VMs/core and other times as vCPUs/core.
1. Recommended practice is 1 vCPU for each VM, except where you need more
You can treat the application's CPU requirements (these days usually given in terms of cores) as the number of vCPUs.
2. Logical processors (as they are called under Configuration / Processor) are not usually counted. They can improve scheduling, but they cannot double the number of cores.
The right ratio between vCPUs and physical host cores depends on your workload.
Andre
I have always heard between 6 and 10 as well, but it really depends on your workload. The Configuration Maximums guide actually tops out at something like 25 per core: http://www.vmware.com/pdf/vsphere4/r41/vsp_41_config_max.pdf.
25...yikes. Yes, I realize it is workload-specific, but that can be a complex calculus. I suppose acceptable CPU latency is a function of whether end users notice.
I was really just trying to size a new environment with an appropriate amount of memory. There's no sense loading a host with 2TB if CPU contention will prevent you from using it.
So if I use a best-practice number like 6 vCPUs/core, I can at least extrapolate an upper memory baseline. Using the average sizes of VMs in my current environment, 512GB on a 40-core host would be just about right (although I'm not sure I want a host that large).
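That extrapolation is easy to sketch as a back-of-the-envelope calculation. The per-VM averages below are hypothetical placeholders, not figures from the poster's environment; plug in your own numbers:

```python
# Back-of-the-envelope host memory baseline from a vCPU:core ratio.
# All inputs here are illustrative assumptions -- substitute figures
# measured in your own environment.

def memory_baseline_gb(cores, vcpu_per_core, avg_vcpu_per_vm, avg_mem_gb_per_vm):
    """Estimate how much host RAM the CPU side can realistically drive."""
    total_vcpus = cores * vcpu_per_core        # schedulable vCPUs at the chosen ratio
    vm_count = total_vcpus / avg_vcpu_per_vm   # VMs that fit before hitting the ratio
    return vm_count * avg_mem_gb_per_vm        # memory those VMs would consume

# Example: 40 cores at a 6:1 ratio, VMs averaging 1 vCPU and 2 GB RAM each
print(memory_baseline_gb(40, 6, 1.0, 2.0))  # -> 480.0, close to the 512GB figure
```

With these assumed averages, the 6:1 ratio caps the host at roughly 480GB of usable memory, which is why piling 2TB onto a 40-core box buys nothing once CPU contention bites.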
In my experience I have always ended up memory bound.
Same here. I've never even had to consider CPU calculations, but a ProLiant DL580 G7 can support 2TB of memory. Even with 40 cores, I'm certain we would encounter problematic CPU latency. Thanks for all the comments.
Every environment is different, and the tested/published best-practice cases were run against workloads different from yours. It's great to understand the basic best practices, but when you put together an actual design and implementation for a large-scale project, you will need to accurately test and analyze the workload of those applications and tune things to the point where performance is acceptable. That's especially true if you plan to virtualize SAP, Oracle, SQL, or Exchange servers.
The host you have is a beast; make sure you take full advantage of it, or otherwise use consolidated blade solutions such as Cisco UCS, the HP c7000, or the IBM HS22 series. I'm not sure how large-scale your virtualization will be, but there are tons of great papers out there to read. Don't put too many eggs in one basket; sometimes it's best to distribute the load evenly across smaller hosts, horizontally.
WORKLOAD/STRESS TESTING IS THE KEY TO YOUR SUCCESS. YOU'RE THE ONLY ONE WHO KNOWS WHERE THE SWEET SPOT IS!
I agree, and I prefer to grow out rather than up. We're primarily an HP shop, so ultimately we'll likely land on DL380 G7s with dual hex-core CPUs and 160GB-192GB of memory. I'm forced to offer a monster-box alternative for comparison. We looked at the HP c7000 and FlexFabric as well, but can't really make the numbers work for our environment. I also want to wait a bit longer for converged infrastructure to mature.
I also agree that there is no substitute for proper baselining, testing, and analysis, but a lot of those tools are better suited to right-sizing individual VMs than to determining the proper CPU/memory ratio per host/cluster. Regardless, the deadlines we're under won't allow much discovery. I will know very little about the applications, let alone their workloads, until just before we import them. So, good or bad, I'll have little choice but to rely a bit more on best practices and case-study numbers.
Server 2008 seems to very much prefer 2 vCPUs compared to 2003, so that should be considered.
Plus, the whole "1 vCPU" rule is really old hat in my opinion; it dates from before the ESX scheduler matured significantly.
There's a reason benchmarks like VMmark don't even have 1-vCPU VMs in their tiles.
Can you expand on Server 2008 preferring 2 vCPUs? I have a bunch of 2008 VMs running with 1 vCPU, no problems. A lot of them barely use any resources.
Even with the improved scheduler I can't see any justification for adding resources that aren't required.
I fully agree here: if you have the capacity in your DC, I definitely prefer to go with more ESX hosts.
This way, the impact of losing a host is smaller and my ability to handle failover is better.
I appreciate that this may be slightly less cost-effective once I include the cost of networking, power, air conditioning, licensing, and so on, but if you have enough confidence in VMware, you can design your environment to run hosts at higher utilization and use technologies like DPM to bring 'reserve' boxes up in the event of issues.
Generally, the way I see it, the more baskets for my eggs, the better.
AureusStone wrote:
Can you expand on Server 2008 preferring 2 vCPUs? I have a bunch of 2008 VMs running with 1 vCPU, no problems. A lot of them barely use any resources.
Even with the improved scheduler I can't see any justification for adding resources that aren't required.
The server simply works faster when the additional CPU is there, and I don't just mean when it's working hard, either.
Server 2003 is snappy regardless of how many CPUs you have (given no other constraints in the environment, of course). My experience has shown that 2008, in general, responds much better with 2.
Guess it's just my experience.
AureusStone wrote:
Can you expand on Server 2008 preferring 2 vCPUs? I have a bunch of 2008 VMs running with 1 vCPU, no problems. A lot of them barely use any resources.
Even with the improved scheduler I can't see any justification for adding resources that aren't required.
It all depends on your needs. If 1 vCPU is working fine and there is little benefit to the increased performance, it is better to size small. As a best practice, start with 1 vCPU. After monitoring, if performance is a concern or you see that 1 vCPU isn't sufficient, you can always add more later. Most of our Windows 2008 servers are 1 vCPU.