VMware Cloud Community
mishaelpl
Contributor

When is pCPU overcommitted once HT is enabled?

hello!

Let's say I have an ESX 4.0u3 host with the spec below:

4 cpu sockets x 6 cores per socket = 24 physical cores

With HT enabled, that gives me 48 logical processors.

Now I have 23 virtual machines running on this host, each configured with 2 vCPUs = 46 vCPUs.

Have I already overcommitted the physical CPUs?

I've read many articles which conclude that HT would give me on average only around a 10-20% performance increase, and I've also found articles stating that the number of logical CPUs (with HT enabled) should be treated as the number of pCPUs in the host... I don't really understand why people count it this way...

I believe it should be counted as number of pCPUs = number of physical cores, and HT can just help ESX deal with overcommitment.
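The two counting conventions can be sketched numerically (a minimal Python sketch; the host and VM figures are the ones from this post):

```python
# Host from the post: 4 sockets x 6 cores = 24 physical cores; HT doubles
# the logical CPU count to 48.
sockets, cores_per_socket = 4, 6
physical_cores = sockets * cores_per_socket      # 24
logical_cpus = physical_cores * 2                # 48 with HT enabled

# Load from the post: 23 VMs with 2 vCPUs each.
total_vcpus = 23 * 2                             # 46

# Counting pCPU = physical cores (the view argued above):
print(round(total_vcpus / physical_cores, 2))    # 1.92 -> overcommitted
# Counting pCPU = logical CPUs (the HT-counting view in some articles):
print(round(total_vcpus / logical_cpus, 2))      # 0.96 -> not overcommitted
```

So the same host is either ~2:1 overcommitted or just under 1:1, depending purely on which convention you pick.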

What is your view on that, and how far would you go with deploying VMs in this situation?

Let's say the VMs are hosting various services like SQL, Exchange CAS, BlackBerry, app servers that are occasionally used for compilations, terminal servers, etc.

Thanks in advance!

1 Solution

Accepted Solutions
cmacmillan
Hot Shot

Upgrade to vSphere 4.1 or 5.0 to get better SMP scheduling...

That said, since you're stating 6 cores and 12 threads per socket (4 x 12 = 48), you've got a Xeon 7500-class processor. If you dissect the VMware VMmark 2 results comparing one-thread-per-core results with SMT results, you should find a scaling number in the 1.6-1.8 range. That says to me you're getting a 60-80% increase in scheduling and execution throughput for the "typical" application profiles defined by VMmark.
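As a rough back-of-envelope check, that scaling range can be turned into "core-equivalents" of capacity and compared against the 46 vCPUs from the original question (assuming the VMmark-derived 1.6-1.8 factor applies to this host; real throughput depends on the workload mix):

```python
# Translate the SMT scaling range into core-equivalents of capacity and
# compare against the 46 vCPUs configured on the host in the question.
physical_cores = 24
total_vcpus = 46

for smt_scaling in (1.6, 1.8):
    effective_capacity = physical_cores * smt_scaling
    print(f"scaling {smt_scaling}: {effective_capacity:.1f} core-equivalents")

# Even at the optimistic 1.8 end (43.2 core-equivalents), 46 fully busy
# vCPUs would exceed capacity -- which is why average utilization, not
# vCPU count alone, decides whether this host is really overcommitted.
```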

This does NOT mean that for critically CPU-intensive workloads (i.e. where your co-resident workloads are all stressing/demanding 50%+ CPU utilization) you'll continue to scale in that range. It's been pretty well proven that for high-CPU-utilization workloads (static as opposed to peaky) you might as well disable SMT (your 10-20% average boost; some workloads show negative results). However, if you're pushing CPU utilization that hard, you are, as you said, overcommitting your CPU resources.

That's a corner case for most typical enterprise workloads, where you're averaging 15-20% CPU utilization (as seen from the virtualized OS); in that case you're probably fine and NOT overcommitted using SMT to deliver vCPUs to your workloads. If you're hosting the applications you've indicated on the same platform, you're most likely going to run into storage bottlenecks before you hit CPU limitations. Since shared IOPS are typically much more expensive than cores, scaling compute is relatively cheap by comparison.

Suffice to say, if I were creating a hosted revenue model, I would ignore the "benefits" of SMT in that calculation and base the loading strictly on cores (i.e. as on a non-SMT Xeon or Opteron 6200 series). I'd also make the revenue model charge for vCPU and vRAM resources allocated (as opposed to used) to discourage wasteful provisioning. Likewise for internal "chargeback" models for interdepartmental billing.
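A minimal sketch of that allocated-resource chargeback idea (the rates and the function name are hypothetical, purely for illustration):

```python
# Hypothetical chargeback sketch: bill on allocated vCPU/vRAM, not on
# measured usage, so wasteful over-provisioning costs the tenant money.
# Capacity is priced on strict cores; any SMT headroom is ignored.
RATE_PER_VCPU = 25.0    # assumed monthly rate per allocated vCPU
RATE_PER_GB_VRAM = 10.0 # assumed monthly rate per allocated GB of vRAM

def monthly_charge(vcpus_allocated: int, vram_gb_allocated: int) -> float:
    """Charge scales with what a VM reserves, not with what it uses."""
    return (vcpus_allocated * RATE_PER_VCPU
            + vram_gb_allocated * RATE_PER_GB_VRAM)

print(monthly_charge(2, 8))   # a 2 vCPU / 8 GB VM -> 130.0
```

The same function works for an internal interdepartmental chargeback: each department pays for what it asked for, which keeps provisioning honest.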

Collin C. MacMillan, VCP4/VCP5 VCAP-DCD4 Cisco CCNA/CCNP, Nexenta CNE VMware vExpert 2010-2012 SOLORI - Solution Oriented, LLC http://blog.solori.net If you find this information useful, please award points for "correct" or "helpful".

3 Replies
mishaelpl
Contributor

Thank you for the valuable input!

mishaelpl
Contributor

Collin,

One thing I would like to be sure of:

When you mentioned a scaling number in the 1.6-1.8 range, did you mean scaling against thread counts or core counts?
