VMware Cloud Community
Mark_G
Contributor

How many 4-way SMP VMs can I put on an ESX Server?

Hi all. Please read the whole thread before you flame me for the subject... :-)

I have several ESX Servers with four 4-core 2.13 GHz Intel CPUs in four slots (a total of 16 cores per box).

Let's say that I want to P2V my two physical servers:

  1. Microsoft SQL Server (which can exploit multiple CPUs), consuming a pretty consistent 3.5 GHz on its physical box

  2. Microsoft SQL Server, consuming a pretty consistent 5.5 GHz on its physical box

The solution to the first one is obvious: just make a 2 vCPU VM. That's 4.26 GHz, well in excess of the need, and easy for the vmkernel to schedule.

The second one would run happily within a 4 vCPU VM, but would be processor-constrained if it were in a 2 vCPU VM (regardless of any reservations or shares). So we go to 4-way instead, and 4 x 2.13 GHz = 8.52 GHz, also well in excess of the need.
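(To make the arithmetic explicit, here's a rough sketch in Python -- purely illustrative; the 2.13 GHz per-core figure and the 3.5 / 5.5 GHz demands are the ones above, and the rounding to 1, 2 or 4 vCPUs is just my assumption about the configurations ESX offers.)

    import math

    CORE_GHZ = 2.13              # per-core clock of the quad-core CPUs above
    SUPPORTED_VCPUS = (1, 2, 4)  # assumed configurable vCPU counts (3 isn't a standard option)

    def vcpus_needed(demand_ghz, core_ghz=CORE_GHZ):
        """Smallest supported vCPU count whose aggregate clock covers the demand."""
        raw = math.ceil(demand_ghz / core_ghz)            # e.g. 5.5 / 2.13 -> 3
        return next(n for n in SUPPORTED_VCPUS if n >= raw)

    for name, demand_ghz in [("SQL box #1", 3.5), ("SQL box #2", 5.5)]:
        n = vcpus_needed(demand_ghz)
        print(f"{name}: {demand_ghz} GHz -> {n} vCPU(s) = {n * CORE_GHZ:.2f} GHz available")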

So my question is about cores vs. CPUs (or CPU slots, if you prefer). I generally try to avoid doing 4-way SMP in VMs on 4-CPU ESX Servers, irrespective of the number of cores per CPU (slot).

But is there really a technical limit here? Or are we just avoiding it out of habit, based on experiences from the good ol' pre-multicore days? Even with plenty of single vCPU VMs on the box, the vmkernel could just as easily schedule a "wide" VM on processors 0, 4, 8, and 12... or on processors 1, 5, 9, and 13, etc. (when proc 0 is busy). Or should I plan on a maximum of three such 4-way VMs on any given ESX Server (using anti-affinity), so that the "wide" instructions from the 4 vCPU VMs don't bump into each other?

Is there a new rule of thumb here?

Thanks,

Mark

weinstein5
Immortal

Typically, yes, it is out of habit. At minimum you will want twice as many cores as the maximum number of vCPUs in any one VM, so for a quad-vCPU VM you will need at least 8 cores (yes, I said cores), though I personally would double that to 16 cores. Also, as you indicated, use virtual SMP only when it makes sense, and it appears you have done your homework. I have had too many clients create quad-vCPU VMs just because the physical box was a quad, without taking a look at resource consumption, and then run into issues with the scheduling of the virtual SMP box causing degraded performance.
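(To put that rule in code -- just a sketch; the factor of 2 is my minimum and 4 is my personal preference from above:)

    def meets_rule_of_thumb(total_cores, widest_vm_vcpus, factor=2):
        """At least `factor` times as many physical cores as the widest VM has vCPUs."""
        return total_cores >= factor * widest_vm_vcpus

    print(meets_rule_of_thumb(8, 4))             # True  - the bare minimum for a quad-vCPU VM
    print(meets_rule_of_thumb(16, 4, factor=4))  # True  - the more comfortable 4x sizing
    print(meets_rule_of_thumb(4, 4))             # False - as many cores as vCPUs is asking for trouble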

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
mcowger
Immortal

The technical limit is 128 vCPUs per host, so that would max you out at 32 4-vCPU VMs per host.

Obviously, you'll run into performance problems long before that.

--Matt VCDX #52 blog.cowger.us
MattG
Expert

Yes, you wouldn't want to put 4 x 4-vCPU VMs on the same host (at least not running at the same time), because at minimum there will be contention between one of the VMs and the Service Console's CPU slice.

You bring up an interesting point, though: with virtualization and large numbers of CPU cores, it would be an interesting concept for the x86 architecture to be modified to allow odd numbers of CPUs to be assigned to an OS (I am assuming it is an architecture limitation?).

-MattG If you find this information useful, please award points for "correct" or "helpful".
mcowger
Immortal

There is no such limitation in x86.

It's 100% possible (though not supported) to run 3-vCPU VMs - just edit your VMX file. It works fine in physical machines too (witness the new three-core AMD processors).
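(Roughly, the tweak is a one-line edit to the .vmx with the VM powered off -- numvcpus is the key; anything other than 1, 2 or 4 is unsupported, so treat this as a sketch only:)

    numvcpus = "3"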

--Matt VCDX #52 blog.cowger.us
MattG
Expert

That's pretty cool. I hope VMware adds support for it.

-MattG If you find this information useful, please award points for "correct" or "helpful".
Mark_G
Contributor

Great, it seems I've touched a nerve here. :-)

Would a new formula therefore be (TOTAL CORES - 4) / 4 = MAX, where "MAX" indicates the maximum number of 4 vCPU VMs allowed to run on a single ESX Server before wait states will necessarily result (from at least the work of proc 0), irrespective of processor cycles going unused on other processors?

In other words (a quick sketch of the arithmetic follows the table):

CPUs    Cores per CPU    Total cores    Max 4-way VMs
  8           4               32              7
  4           4               16              3
  4           2                8              1
  2           4                8              1
  2           2                4             none
  1           4                4             none
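Here's the same arithmetic as a throwaway sketch (the formula and the rows are exactly the ones above):

    def max_4way_vms(sockets, cores_per_socket):
        """Proposed rule: (TOTAL CORES - 4) / 4, floored at zero."""
        total_cores = sockets * cores_per_socket
        return max((total_cores - 4) // 4, 0)

    for sockets, cores in [(8, 4), (4, 4), (4, 2), (2, 4), (2, 2), (1, 4)]:
        n = max_4way_vms(sockets, cores)
        print(f"{sockets} CPUs x {cores} cores = {sockets * cores} cores -> "
              f"{n if n else 'none'} four-way VM(s)")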

Is this right?

Regards,

Mark

weinstein5
Immortal

This might be a guide, but I go back to my original comment: only choose SMP when you have empirical evidence that a physical machine needs virtual SMP. Ideally, start with a uni-processor VM and add additional vCPUs only if performance of the VM dictates it. It all comes down to scheduling: since each vCPU needs to be scheduled simultaneously, it is easier to schedule a single vCPU than a dual, and a dual than a quad.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
Aketaton
Contributor

I really liked this article regarding vSMP:

Give it a read :-)

ajnrock
Enthusiast

IMO, based on recent flirtations with 4-way guests: if I truly need 4 vCPUs, I am very likely trading off maximizing my VI ROI in a real big way.

My config is a 2-host DRS/HA cluster: 4-way quad-core Intel 3 GHz 7350s, 64 GB RAM. (A third host is on order. 8-) )

Currently I have ~65 guests running on this config, about half with one vCPU and about half with dual vCPUs. Performance is awesome/adequate in every way, except for... the quad-vCPU VMs.

In this mix, until a week ago I HAD a total of 8 guests (4 on each host) with 4-way vCPU configs: 2 SQL 2005 in an MSCS config and 6 production Exchange 2007. I ran this way for ~6 months and was relatively satisfied with performance. But... I had a few other stability issues which I never attributed to performance. It turns out the fact I was ignoring was that all of the 4-way vCPU guests had insanely high CPU ready times (%RDY). Across those 8 VMs I was having a random BSOD and reboot maybe once a week, and on the SQL MSCS cluster I was seeing the network stack momentarily hang several times a day; maybe twice a week this would cause an MSCS cluster failover. So for the last six months I had been chasing problems I believed were network related, to no avail. After a great heart-to-heart with VMware tech support on an unrelated issue, I was given some great advice regarding my 4-way boxes, and I started playing...

What I found: I dropped my Exchange boxes down to 2-way vCPU guests, and performance and stability increased on all guests, period. I also saw %RDY numbers halved on my 4-way SQL guests, and overall %RDY numbers dropped on all guests in the cluster.

I then dropped down to 2-way vCPUs on both SQL guests: the performance change was negligible, %RDY numbers dropped significantly, and the overall %RDY number dropped a bit. My "networking" issue is resolved. The network stack and MSCS cluster are not timing out anymore. Exchange performance is great. So far no random BSOD; I would have expected one by now out of this group of 8 guests.
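(A rough sketch of the kind of %RDY sanity check involved -- the numbers below are made up for illustration, not my actual counters, and the ~10% per-vCPU warning threshold is just a commonly quoted rule of thumb:)

    def per_vcpu_ready(group_rdy_pct, vcpu_count):
        """esxtop's group %RDY totals the VM's worlds; divide by vCPU count to compare VMs fairly."""
        return group_rdy_pct / vcpu_count

    def looks_cpu_constrained(group_rdy_pct, vcpu_count, warn_pct=10.0):
        """Flag a VM whose per-vCPU ready time exceeds the warning threshold."""
        return per_vcpu_ready(group_rdy_pct, vcpu_count) > warn_pct

    print(looks_cpu_constrained(60.0, 4))  # True:  ~15% ready per vCPU
    print(looks_cpu_constrained(12.0, 2))  # False: ~6% ready per vCPU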

What I learned and will apply to my environment.

1. I might consider putting one guest per 16-core host back to 4-way vCPUs, but I would really have to need the CPU resources in a real bad way.

2. Due to the increased scheduling conflicts introduced by 4-way guests, if I needed to do more than 1 such guest, I would have to use reservations and reduce the number of VMs per host.

3. If I need more CPU resources in my SQL MSCS cluster, I am likely better off with four 2-way guests than two 4-way guests. That allows more flexibility in CPU resource scheduling.

4. If I NEED 4 CPUs, I NEED 4 CPUs. I either need more core density so I can set affinity and reservations, or a dedicated physical box.

Hope this helps.

Mark_G
Contributor

Very helpful data point, ajnrock. Thank you.

So, if I'm reading this correctly, your "recent flirtations with 4-way Guests" may bolster my point. You had eight 4 vCPU Guests, distributed four each on two rather beefy ESX Servers. Stability and performance suffered, you say, until "you started playing..." Would it therefore be fair to say that performance improved once you removed the fourth 4 vCPU Guest from each box, leaving only three 4 vCPU Guests (and thus room for interference-free vmkernel operations on proc 0) on each ESX Server?

(BTW, if our hypothesis is accurate, then such a config isn't truly an HA cluster... someone with your original config might want to double-check the VMware HA settings to make sure it's actually running at n+1. And in any case, you might not even be able to fail over all six remaining 4-vCPU guests onto one box, to say nothing of the other 59 guests.) Or did you get rid of all your 4 vCPU guests?

ajnrock
Enthusiast

To answer your first question, I would say probably, but I did not spend a ton of time analyzing my performance at three 4-way VMs per host; I was focused on resolving a problem. I did most of my analysis at 4, 1 and 0 VMs per host with 4 vCPUs. Keep in mind all of the other 30+ guests I have on each of my hosts, with various levels of resource requirements. The resource requirements of the 4-way guests also come into play; it is not simply a problem with having 4 vCPUs in a VM. For us it is about maximizing VI ROI, and we discovered we did not want to pay the 4 vCPU "penalty"; instead we are taking advantage of how far we can push the envelope with 1 or 2 vCPUs. I suspect that if all I had on the 2 hosts were the eight 4 vCPU guests, life would have been peachy, and I could have even added a few more depending upon how demanding my guest applications were.

And I think this has already been stated in this thread or the other link, but keep in mind that with VMware and SMP, if 1 of a VM's 4 vCPUs needs to be scheduled, all 4 have to be scheduled (onto 4 physical cores) at the same time. This is just the way the technology works. If VMware found a stable way to make all four vCPUs independent and schedule them independently, then the rules would change, but then would it be SMP?

I explained my findings to my team with an Outlook meeting-scheduling analogy. Take a team of 16 people with various levels of time "wasted" in meetings. Some people are always booked into meetings; others are generally easy to find time to talk to. Now say you want to schedule a meeting with this group: when you look at their free/busy info, there are vastly more options for finding a one-on-one slot than there are for getting any 4 people together. If you must have the meeting with four people, you are going to have to wait longer to get your meeting. There are any number of ways to beat this analogy to death, but at least the CPUs in VMware show up on time ;-)
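(If you want to put toy numbers on the analogy, a quick simulation shows the gap -- the 85% "busy" figure is completely arbitrary, it just stands in for a reasonably loaded 16-core host, and this is a toy model, not the real vmkernel co-scheduler:)

    import random

    def chance_of_slot(needed, total_cores=16, busy_prob=0.85, trials=100_000):
        """Odds that at least `needed` cores are simultaneously free when each core
        is independently busy with probability `busy_prob`."""
        hits = 0
        for _ in range(trials):
            free = sum(random.random() > busy_prob for _ in range(total_cores))
            hits += free >= needed
        return hits / trials

    print(f"room for a 1-vCPU VM: {chance_of_slot(needed=1):.1%}")  # roughly 90%+
    print(f"room for a 4-vCPU VM: {chance_of_slot(needed=4):.1%}")  # far lower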

HA is only as good as your design and planning. As is always the case with N+1, the cost of true N+1 is much higher with 2 hosts than it would be with 3, 4, 8 hosts, etc. You have to design in the resources to sustain a host failure, accept performance degradation, or blend the two. In my HA config I only start the critical VMs from the failed host; everything else stays down. I have suffered a host crash, and the performance hit was acceptable. Also, I am augmenting my HA config with MSCS in some places, with a virtual MSCS node on each host. If I lose a host, there is no reason to bring up the failed virtual MSCS node on the other ESX node.