dmorgan
Hot Shot

Performance Recommendations - Single vs. Multiple vCPUs

I have a few questions regarding performance of VMs. We want to test some performance numbers for some of our database apps. I have a physical server running a particular Oracle 8 database. What I wanted to do was create a VM, load that same database, run some queries, and compare the performance. With regard to CPU, I was under the impression that using dual vCPUs is in many cases actually slower than running a single vCPU. I understand about the time slices required for physical CPUs, and this all makes sense.

However, I set up my test VM this weekend, and either I did something wrong in my setup or I am not getting the performance I expected. We have a small environment: 4 ESX servers running on Dell 1955 blade servers. I VMotioned all running VMs to the three other blades, leaving one blade available for nothing but this single VM. I installed Windows Server R2 Enterprise on this VM and set up the Oracle database. My expectation was that with 8 cores and all of the memory available, setting a CPU reservation higher than a single core provides, and allowing the limit to go as high as it wanted, the VM would essentially use more than a single physical CPU to do the processing, even though it has only a single vCPU. I never saw the MHz in the performance tab go above 2.x GHz, roughly the equivalent of a single core.

Am I incorrect in assuming that with only this one VM on the ESX server, it should be able to use more than a single physical core to process the needs of this VM? I haven't gotten to timing and comparing any queries yet, but I was expecting to see the total CPU usage go above that of a single core first. Any suggestions? Thanks in advance.
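
As a rough back-of-the-envelope check of that expectation, here is a quick Python sketch (the per-core clock is a hypothetical value based on the ~2.x GHz cores mentioned in the post): the ceiling the performance chart can show is roughly the number of vCPUs times the clock of one core, no matter how many physical cores sit idle.

    # Rough ceiling on the MHz a VM can consume, assuming each vCPU can only
    # run on one physical core at a time. CORE_MHZ is a hypothetical figure
    # based on the ~2.x GHz cores described above.
    CORE_MHZ = 2333          # hypothetical per-core clock of the blade
    PHYSICAL_CORES = 8       # cores in the host, as described in the post

    def max_vm_mhz(vcpus: int) -> int:
        """Upper bound on the MHz the VM can show in the performance tab."""
        return min(vcpus, PHYSICAL_CORES) * CORE_MHZ

    print(max_vm_mhz(1))   # ~2333 MHz: matches the ~2.x GHz cap observed
    print(max_vm_mhz(2))   # ~4666 MHz: what a 2-vCPU VM could reach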

If you found this or any other post helpful please consider using the Helpful/Correct buttons to award points.

1 Solution

Accepted Solutions
RParker
Immortal

Your understanding of the ESX 3.5 architecture is a bit off. First, a sole VM on an ESX host with 8 cores doesn't automatically get exclusive access to the machine. When you set up a VM, you give it parameters that define the limits of that VM, even though those parameters are virtual rather than physical.

You set up a VM with 1 vCPU. It can't go beyond 1 CPU's worth no matter what you do. That's 1 CPU. Your ESX host is designed to give time slices to VMs by dividing them across the VMs' parameters: a 1 CPU / 512 MB RAM VM is all you get. The other 7 cores will each contribute (based upon ESX's computations) only up to the MAX set forth in the VM's *.vmx file. Also, shares aren't a factor until the ESX host is at around 90% of capacity. If you had 8 VMs and they were ALL pinning the CPU at 100%, then the ESX server would be forced to prioritize based upon share level. If all the shares are the same, it divides resources equally. If 1 VM is at high priority and the rest are at low, the high-priority VM ONLY gets preferential access when the ESX server is having trouble keeping up; otherwise ALL VMs run at the same level and get EQUAL time on the host, regardless of share level. The shares are there to ensure that VMs that demand a higher share level get theirs first.

Imagine a pie that has 8 slices. You have 8 homeless people, all needing food. Each of them will get their food, but some might get theirs first based upon factors like who got there first, who is bigger, or maybe they had to draw numbers and line up according to the number they drew.

That number decides who gets what they need first. It's the same with ESX: ESX makes each VM draw (based upon its pre-set shares) for its priority. VMs with the same priority fight over an equal distribution; VMs with higher shares get theirs before the lower shares do, and IF there is something left over, the rest of the VMs get a share. Shares are not there to make one VM or pool the winner all the time; they only kick in when the ESX host needs to allocate resources due to scarcity. When resources are plentiful, there is no reason to prioritize, because there is enough for everyone.
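
A quick Python sketch of the shares-as-ratios idea described above (the share values and clock speed are made up for illustration; the real ESX scheduler is far more sophisticated than this):

    # Toy illustration of proportional-share CPU allocation under contention.
    # Only shows the shares-as-a-ratio idea from the explanation above.
    HOST_MHZ = 8 * 2333   # hypothetical: 8 cores at ~2.33 GHz

    # Share values picked purely for illustration (low/normal/high style ratios).
    vm_shares = {"vm_low": 500, "vm_normal": 1000, "vm_high": 2000}

    def allocate_under_contention(shares, host_mhz):
        """Split the host's MHz in proportion to shares when every VM wants 100%."""
        total = sum(shares.values())
        return {name: host_mhz * s / total for name, s in shares.items()}

    for name, mhz in allocate_under_contention(vm_shares, HOST_MHZ).items():
        print(f"{name}: {mhz:.0f} MHz")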


10 Replies
mcowger
Immortal

How many vCPUs does your VM have? If it has only 1 vCPU, it will only be able to use a single core's worth of performance.

I don't know why people say 2 vCPUs is SLOWER than 1; it simply isn't. It's nearly 2x as fast, assuming you don't have contention. Granted, there is an increased likelihood of contention with lots of dual or quad vCPU VMs, but it's certainly not always SLOWER. Oracle is an EXCELLENT example of a multithreaded app that does WELL with 2 or 4 vCPUs. Try it.
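
To see why the gain depends on how multithreaded the workload is, here is a quick Amdahl's-law sketch in Python (the parallel fractions are illustrative only, not Oracle measurements):

    # Amdahl's-law sketch: the speedup from extra vCPUs depends on how much
    # of the workload can actually run in parallel.
    def speedup(parallel_fraction, vcpus):
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / vcpus)

    for frac in (0.50, 0.90, 0.99):
        print(f"parallel={frac:.0%}: "
              f"2 vCPUs -> {speedup(frac, 2):.2f}x, "
              f"4 vCPUs -> {speedup(frac, 4):.2f}x")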

--Matt

--Matt VCDX #52 blog.cowger.us
spex
Expert

If your VM has one virtual CPU assigned, it has only one CPU inside for scheduling processes and threads, regardless of how many physical CPUs are left idle.

Since the physical CPU is passed through to the VM nearly transparently, a vCPU has no more capability than a real CPU.
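
A quick way to confirm this from inside the guest (Python shown as one option; Windows Task Manager shows the same thing) is to check how many logical CPUs the OS actually sees:

    # Run inside the guest: the OS can only schedule threads onto the logical
    # CPUs presented to it, no matter how many cores the ESX host has.
    import os

    print(f"Logical CPUs visible to this OS: {os.cpu_count()}")
    # On the 1-vCPU VM described above this prints 1, even on an 8-core blade.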

Regards

Spex

RParker
Immortal

> I don't know why people say 2 vCPUs is SLOWER than 1; it simply isn't. It's nearly 2x as fast, assuming you don't have contention.

Contention doesn't have anything to do with it; it's scheduling. An ESX server has to provide ALL of a VM's processors at the time the VM requests them: it can't give 1 CPU and then the other, they both have to be available. As your ESX server becomes busier, it has less time to allocate to VMs that need more CPUs, so you get smaller and smaller time slices. That causes your VM to wait for CPU cycles, which causes the programs inside the VM to miss cycles, which causes delays and slower performance.

So it does get slower, depending on how many multi-vCPU VMs you have and the physical cores on the ESX host. Also, adding more vCPUs isn't necessary unless you have applications that can use them. Just because a VM shows both CPUs in use, don't assume that means meaningful work is being performed.
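
As a toy illustration of that co-scheduling effect, here is a small Python simulation (purely illustrative; the real ESX scheduler uses many more heuristics than this): the busier the host, the lower the chance that enough physical cores are free in the same instant, so a multi-vCPU VM has to wait more often than a single-vCPU VM.

    # Toy model of the co-scheduling point above: a VM can only run when
    # enough physical cores are free at the same moment.
    import random

    random.seed(0)
    CORES = 8

    def chance_vm_can_run(vcpus, busy_prob, trials=100_000):
        """Fraction of scheduling instants in which `vcpus` cores are free."""
        hits = 0
        for _ in range(trials):
            free = sum(random.random() > busy_prob for _ in range(CORES))
            if free >= vcpus:
                hits += 1
        return hits / trials

    for busy in (0.3, 0.6, 0.9):
        print(f"host {busy:.0%} busy: "
              f"1-vCPU VM can run {chance_vm_can_run(1, busy):.0%} of the time, "
              f"2-vCPU VM {chance_vm_can_run(2, busy):.0%}")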

dmorgan
Hot Shot

Thanks Matt for the timely response. So I guess my assumptions were wrong, and assigning CPU reservations and limits above the sum total of the vCPUs doesn't gain anything. I guess the best way to go is with a dual or quad vCPU database server, and test with that. Now, assume we expand in the future and want to run multiple database servers as VMs. If these all have dual or quad vCPUs, there is an increased likelihood of contention, correct? That said, if the queries run on these are not run simultaneously, we can have many quad-vCPU VMs running on a single blade with 8 physical cores. So even though there are only 8 physical cores, we could run, say, 4 database servers with quad vCPUs each. As long as we don't have more than two database servers processing large queries at the same time, we should avoid contention problems, correct?

Thanks again for the help.

Don

If you found this or any other post helpful please consider using the Helpful/Correct buttons to award points.

sbrodbeck
Contributor

VMware cannot aggregate or daisy-chain multiple cores together to create more than one CPU's worth of processing power for a VM. While VMs can use multiple vCPUs, the OS on those VMs needs to know how many vCPUs have been presented so it can assign work to each one. Since the Oracle VM you created only had one vCPU presented to it, Windows will only run one thread at a time. To gain more than one vCPU's worth of performance, you will need to present multiple vCPUs to the VM.

Oh, and I remember Windows makes a nasty decision at OS installation about which kernel to load. You may still run into problems unless you present another vCPU and probably reload the OS to activate the multiprocessor kernel. Once you do this, you should see more than 2 GHz of processing from the VM.

mcowger
Immortal

It has everything to do with contention, which is exactly what you described. What do you think scheduling of busy resources is? It's CONTENTION.

Assuming you DON'T have contention problems, a 2-vCPU VM will have nearly TWICE the computing power of a single-vCPU VM, period. Granted, not every application will effectively use that. However, the OP's application, Oracle, ABSOLUTELY will.

--Matt

--Matt VCDX #52 blog.cowger.us
mcowger
Immortal

> That said, if the queries run on these are not run simultaneously, we can have many quad-vCPU VMs running on a single blade with 8 physical cores. So even though there are only 8 physical cores, we could run, say, 4 database servers with quad vCPUs each. As long as we don't have more than two database servers processing large queries at the same time, we should avoid contention problems, correct?

Pretty much correct. We have a number of VMs on a single 8-core host, each with 2 or 4 vCPUs. Because they aren't all working hard at the same time, they are generally fine and we don't see a ton of contention. Be watchful of it, though.

Things like shares and reservations only come into play once you have contention. Before then, your limit is the number of configured vCPUs.
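
Spelling out the arithmetic of the plan quoted above as a small Python sketch (the "two busy at a time" figure is the assumption being made in the plan, not a measurement):

    # vCPU overcommit arithmetic for the plan described above.
    PHYSICAL_CORES = 8
    VMS = 4
    VCPUS_PER_VM = 4

    total_vcpus = VMS * VCPUS_PER_VM
    print(f"vCPU:pCPU = {total_vcpus}:{PHYSICAL_CORES} "
          f"({total_vcpus / PHYSICAL_CORES:.1f}:1 overcommit)")

    # Assumption from the plan: at most two databases run big queries at once.
    busy_vms_at_once = 2
    busy_vcpus = busy_vms_at_once * VCPUS_PER_VM
    print(f"vCPUs demanding CPU at peak: {busy_vcpus} "
          f"(fits within {PHYSICAL_CORES} physical cores)")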

--Matt

--Matt VCDX #52 blog.cowger.us
dmorgan
Hot Shot

I guess what I am trying to accomplish here is to find the best performance for an application that DOES utilize multiple processors: Oracle. For testing, contention will not be an issue, as I have given the test VM its own blade to run on. However, in the future there will be multiple Oracle database servers running as VMs, or at least that is the plan, so we could run into a situation where scheduling/contention becomes a factor. My next thought is that if multiple processors are faster, assuming scheduling/contention is not a factor, couldn't I mitigate the contention by using resource pools and/or processor affinities?

Thanks,

Don

If you found this or any other post helpful please consider using the Helpful/Correct buttons to award points.

mcowger
Immortal

> We could run into a situation where scheduling/contention becomes a factor. My next thought is that if multiple processors are faster, assuming scheduling/contention is not a factor, couldn't I mitigate the contention by using resource pools and/or processor affinities?

If contention is not a factor, resource pools don't really do much; again, they only kick in when you DO have contention. Once you DO have contention, yes, you can limit the impact with resource pools and appropriate share levels.

I would avoid using processor affinities if at ALL possible; you really limit the scheduler's options that way.

--Matt

--Matt VCDX #52 blog.cowger.us