VMware Cloud Community
Gruelius
Contributor

Guest with more vCPUs than pCPUs a bad idea?

Hi All,

My understanding of VMware was that a guest with high parallelism should not have more vCPUs than physical cores on the host. We have a high-performance SQL Server VM (Windows Server 2008 R2, soon to be 2012) with 16 vCPUs on an ESXi 5.1 host with 2x 8-core processors, giving a total of 32 logical processors. It sometimes sees spikes of 100% across all vCPUs.

Anyway, they want to run 24 vCPUs, and as I don't have a test environment to try this out on and confirm my thoughts, I have come here.

If we allocated more vCPUs than physical cores to a single guest, would it work fine for certain loads but then, past a certain tipping point (when the scheduler can no longer place all vCPUs on physical cores at once), spiral into huge wait times? I guess it just depends on whether VMware can communicate to the guest which vCPUs are on the same physical core.

And unfortunately we are not able to scale this guest back at the moment... we are looking at extra guests with fewer vCPUs and replicated SQL instances on the same host to overcommit the hardware and make better use of our processing power.


Accepted Solutions
zXi_Gamer
Virtuoso

"ESXi 5.1 host with 2x 8-core processors, giving a total of 32 logical processors. It sometimes sees spikes of 100% across all vCPUs."

From esxtop or the vSphere client, find out what is hogging all the CPU cycles.

"If we allocated more vCPUs than physical cores to one single guest, would this mean that it would work fine for certain loads?"

It might not be a good option in a latency-sensitive environment.

"I guess it just depends on whether VMware can communicate to the guest which vCPUs are on the same physical core."

You can influence that by setting CPU affinity on the particular VM where more performance is expected.

"I managed to convince the relevant parties to just go for more, smaller VMs and resource allocation, and that has worked well."

That is an excellent suggestion too.


5 Replies
DITGUY2012
Enthusiast

Did you find your answer? We're wondering something similar.

Gruelius
Contributor

We did not get time to test; I managed to convince the relevant parties to just go for more, smaller VMs with resource allocation, and that has worked well.


I'd just build a test VM if you can and watch ready time.
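To put a number on "watch ready time": vCenter's real-time charts report CPU Ready as a summation in milliseconds per sampling interval, so converting it to a percentage is a one-liner. A minimal sketch, assuming the real-time chart's 20-second interval and a commonly cited ~5% warning threshold (neither comes from this thread):

```python
# Convert a vCenter "CPU Ready" summation (milliseconds) into a percentage.
# Assumes the real-time chart's 20-second sampling interval; if the metric
# is VM-wide, divide further by the vCPU count for a per-vCPU figure.

def cpu_ready_percent(ready_ms: float, interval_s: int = 20) -> float:
    """Percentage of the sample interval a vCPU spent ready but unscheduled."""
    return ready_ms / (interval_s * 1000) * 100

# 1000 ms of ready time in a 20 s sample is 5% ready -- often treated
# as the point where guests start to feel sluggish.
print(cpu_ready_percent(1000))   # 5.0
print(cpu_ready_percent(4000))   # 20.0 -- serious contention
```

In esxtop the equivalent counter is %RDY, already expressed as a percentage per vCPU.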

ScreamingSilenc

Check the links below for vCPU best practices:

Virtualization: vCPU Provisioning Best Practices | The Little Things

www.vmware.com/pdf/Perf_Best_Practices_vSphere5.1.pdf

0v3rc10ck3d
Enthusiast

VMs should be sized appropriately for the workload's needs.

If you allocate more vCPUs to a VM than it needs, it will actually cause poorer performance via CPU ready time.

A VM's workload is scheduled on the physical cores; think of it like a 4-lane toll road (for a 4-core processor).

If you have a VM with 4 vCPUs allocated, you need all 4 lanes open in order to push a process through; if you had allocated only 2 vCPUs, it would be scheduled more easily, as only two lanes would be required. This manifests as latency, because you have to wait for the lanes to be open even if the VM doesn't actually need them.

Select a VM, go to the performance charts, and open Advanced. Select CPU and view the Usage % metric.

If you have a 4-vCPU VM and it never exceeds 50% usage, it would be better off with 2 vCPUs.
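That rule of thumb can be sketched as a small calculation. The 4-vCPU / 50% figures come from the post above; the function name and the optional headroom factor are illustrative assumptions, not an official sizing formula:

```python
import math

def suggested_vcpus(current_vcpus: int, peak_usage_pct: float,
                    headroom: float = 1.0) -> int:
    """Estimate a right-sized vCPU count from observed peak CPU Usage %.

    peak_usage_pct is the VM-wide Usage % from the performance charts;
    headroom > 1.0 keeps some slack above the observed peak.
    """
    needed = current_vcpus * (peak_usage_pct / 100) * headroom
    return max(1, math.ceil(needed))

# The example from the post: a 4-vCPU VM that never exceeds 50% usage.
print(suggested_vcpus(4, 50))                  # 2
print(suggested_vcpus(4, 50, headroom=1.25))   # 3, with 25% slack
```

Watch the peak over a representative period (including batch windows), not just the average, before cutting vCPUs.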

Also, for increased performance, allocate vCPUs in a layout matching the underlying physical CPUs. Each physical CPU has local cache that can be accessed by its cores.

So if you have two physical CPUs with 4 cores each, don't allocate 1 socket with 8 cores of vCPU to a VM, as the VM will try to access cache from half of its cores that won't be available.

PS. VMware recommends not allocating more than a 3:1 ratio of vCPUs to physical cores. If you have 16 physical cores, don't go allocating 16 vCPUs to 20 different VMs or you'll have a bad time.
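Checking a host against that 3:1 guideline is simple arithmetic. A minimal sketch using the cautionary numbers from the paragraph above (the function name and per-VM list are illustrative):

```python
def overcommit_ratio(vcpus_per_vm: list, physical_cores: int) -> float:
    """Host-wide vCPU:pCore overcommit ratio across all powered-on VMs."""
    return sum(vcpus_per_vm) / physical_cores

# The bad-time scenario: 20 VMs with 16 vCPUs each on 16 physical cores.
ratio = overcommit_ratio([16] * 20, physical_cores=16)
print(f"{ratio:.1f}:1")   # 20.0:1 -- far past the 3:1 guideline
```

Note that the 3:1 figure is a general guideline; latency-sensitive workloads like the SQL Server VM in this thread often warrant staying much closer to 1:1.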

VCIX6 - NV | VCAP5 - DCA / DCD / CID | vExpert 2014,2015,2016 | http://www.vcrumbs.com - My Virtualization Blog!