VMware Cloud Community
grob115
Enthusiast

vCores

While reading one of the ESXi documents on the number of virtual CPUs that can be assigned to guest OSes, I recall one part saying the number of virtual cores cannot exceed the number of physical cores. They're talking about the total number of virtual cores on any given guest, not all of the guests combined, right? So if I have a quad-core CPU with no hyperthreading, I can create, say, 5 VMs each with 2 virtual CPUs, right?

0 Kudos
6 Replies
JimKnopf99
Commander

Yes, that's possible, but then you have CPU overcommitment.

You have to check your host's CPU state.

frank






If you find this information useful, please award points for "correct" or "helpful".

0 Kudos
kac2
Expert

Yes, you are correct.

If you have a server with 8 physical cores, you can have VMs with 8-way SMP. Likewise, you can have 50 VMs, all with 2 vCPUs. You are overcommitting, but there are some things to keep in mind. The more vCPUs you give to a VM, the more likely it is to get its computations done.

If you have 10 VMs with 4 vCPUs and 20 VMs with 1 vCPU, you are giving the 10 VMs with 4 vCPUs the majority of the CPU resources, and therefore giving your 1-vCPU VMs a longer wait for their computations. It's always a safe bet to have the majority of your VMs configured with 1 vCPU. That includes SQL and Exchange servers 🙂

0 Kudos
grob115
Enthusiast

Can you tell me more about what to check on the host, and how? Any specific settings for the guests?

It's a Xeon 3430 powering the following. Would this be too much? How much RAM should I leave for ESXi? It has 8GB in total, so with 7GB for the VMs it would still have 1GB left for ESXi.

VM    | CPU | RAM
------+-----+----
Web   |  2  | 2GB
Web   |  1  | 1GB
dB    |  2  | 2GB
BIND  |  1  | 1GB
Mail  |  1  | 1GB
------+-----+----
Total |  7  | 7GB
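For reference, the totals in the table can be checked with a few lines of Python (a quick sketch; it assumes the Xeon 3430 is the quad-core part with no hyperthreading, i.e. 4 physical cores, as described in the original question):

```python
# VM sizes taken from the table above: (vCPUs, RAM in GB) per VM.
vms = [
    ("Web-1", 2, 2),
    ("Web-2", 1, 1),
    ("dB",    2, 2),
    ("BIND",  1, 1),
    ("Mail",  1, 1),
]

physical_cores = 4   # quad-core Xeon 3430, no hyperthreading
host_ram_gb = 8

total_vcpus = sum(cpu for _, cpu, _ in vms)
total_ram_gb = sum(ram for _, _, ram in vms)

print(f"Total vCPUs: {total_vcpus} "
      f"({total_vcpus / physical_cores:.2f}:1 vCPU-to-core overcommit)")
print(f"VM RAM: {total_ram_gb} GB "
      f"(leaves {host_ram_gb - total_ram_gb} GB for ESXi)")
```

At 7 vCPUs on 4 cores the overcommit ratio is only 1.75:1, well under the 3:1 to 4:1 ratios mentioned later in this thread.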

0 Kudos
mudtoe
Enthusiast

The thing to remember about how ESXi runs VMs is that it dispatches vcores (each of which basically represents a physical core on a physical CPU) using a time-slicing algorithm, not the interrupt-driven algorithm typically used to dispatch processes in a regular operating system. What this means in practice is that if you assign, say, 2 vcores to a guest, then 2 physical cores must be available before that guest can run at all; and once it starts running (i.e. getting its time slice), those two physical cores are not available to any other VM until that guest has finished its time slice, even if the guest is mostly idle during it. Therefore you have to be careful not to assign a significant percentage of the total cores available on your system to a single guest. If you do, you will start to notice choppy performance, even though it doesn't look like the CPU is overcommitted.
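The co-scheduling constraint described above can be sketched in a few lines of Python (a toy model of the behaviour, not how the ESXi scheduler is actually implemented; the VM names are illustrative):

```python
# Toy model of strict co-scheduling: a VM is only dispatched when as
# many free physical cores as it has vCPUs are available at once, and
# it holds all of them for its entire time slice.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    vcpus: int

def schedule_round(vms, physical_cores):
    """Greedily pick which VMs can run in one scheduling round."""
    free = physical_cores
    running, waiting = [], []
    for vm in vms:
        if vm.vcpus <= free:   # all of the VM's vCPUs must fit at once
            free -= vm.vcpus
            running.append(vm.name)
        else:                  # even a mostly idle wide VM blocks others
            waiting.append(vm.name)
    return running, waiting

vms = [VM("SBS2008", 4), VM("XP-1", 1), VM("XP-2", 1)]
running, waiting = schedule_round(vms, physical_cores=4)
print(running, waiting)  # the 4-vCPU VM grabs every core; the rest wait
```

On a 4-core host, the 4-vCPU VM occupies every core for its slice, so the 1-vCPU VMs can only run in rounds where it does not, which is the "choppy" behaviour described in this reply.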

As an example, before I understood how this really worked, I thought that ESXi behaved just like an interrupt-driven operating system, and that all I had to do was assign the right priority to the workload; I could then assign as many vcores as I thought a particular VM might ever need. In my case, on a 4-physical-core system I assigned 4 vcores to an SBS 2008 VM and a single vcore to a couple of other WinXP VMs. I noticed that the SBS 2008 VM always ran very choppily when I logged on to it from the RDC console. The reason was that before the SBS 2008 VM was allowed to run at all, all of the vcores had to be available, which of course meant no other guest could be running, and ESXi itself had to get out of the way too so that the guest could have all the CPU resources in the computer.

After going back and forth with some folks in this forum, I finally understood that ESXi could not take vcores that were assigned to a running VM but happened to be idle at the moment, hand them to another VM at the same time, and let the two share the vcore via an interrupt-driven strategy. When I cut the SBS 2008 VM down to two vcores, things ran much better, even though it took longer to start up and shut down (the only times it could really make good use of all four vcores were during startup, shutdown, and when patches were being applied). After a little more playing around, I found that the best compromise was to reserve one vcore exclusively for the SBS 2008 VM (by using scheduling affinity to prevent the other WinXP VMs from ever using one of the four vcores) and let it compete with the WinXP VMs for one of the other three vcores via the time-slicing parameters. As the SBS 2008 VM was the most important workload on the machine, it made sense to reserve a quarter of the machine's CPU resources for its exclusive use.

Later on, I transferred the SBS 2008 VM and the WinXP VMs, along with a few new Win 7 VMs, to a new Dell 2970 server with two physical CPUs of 6 cores each, giving 12 vcores. I was then able to set the SBS 2008 VM back to four vcores instead of two, because on a twelve-vcore system that represented only one third of the available vcores. I was also able to give the WinXP and Win 7 VMs two vcores each instead of one, as that represented only one sixth of the available vcores. I also used the same trick as on the previous box and reserved two of the twelve vcores for the exclusive use of the SBS 2008 VM, so that it only had to compete with the desktop Windows VMs for two of the four vcores it needed to run.

This also highlights the fact that, because of how the time slicing works, if you are going to run several VMs at once it's better for ESXi to have a larger number of slower vcores than a smaller number of faster ones, even if the total computing power of the two configurations is theoretically the same (unless you are running VMs that do something like scientific calculations in a single-threaded manner).

mudtoe

0 Kudos
dkfbp
Expert

Oversubscribing cores on an ESX host is normal practice. You will easily be able to run 3-4 vCPUs per physical core in an average environment.

To check whether your vCPUs get enough access to the physical cores, you need to look at the "CPU ready" counter. In esxtop you want to see a %RDY value of 5% or lower. In the vSphere Client performance tab, that corresponds to 1000 ms (per 20-second sampling interval).
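The two numbers quoted here are the same threshold in different units: the vSphere Client real-time chart reports CPU ready as milliseconds accumulated over its 20-second sampling interval, while esxtop reports a percentage. A minimal conversion sketch:

```python
# Convert the CPU-ready "summation" value shown in the vSphere Client
# performance tab (milliseconds per sampling interval) into the
# percentage that esxtop's %RDY column reports.
def ready_pct(ready_ms, interval_s=20):
    """Real-time charts in the vSphere Client sample every 20 seconds."""
    return ready_ms / (interval_s * 1000) * 100

print(ready_pct(1000))  # 1000 ms over a 20 s interval -> 5.0 (%)
```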






Best regards

Frank Brix Pedersen

blog: http://www.vfrank.org

0 Kudos
J1mbo
Virtuoso

It is sometimes suggested that an overall vCPU-to-core ratio of 4:1 is acceptable; obviously it depends on the workloads and on the absolute number of physical cores (with fewer cores it is more difficult to overcommit successfully).

This might be useful: http://blog.peacon.co.uk/understanding-the-vcpu/

HTH

http://blog.peacon.co.uk

Please award points to any useful answer.

0 Kudos