VMware Cloud Community
NHessonSD21
Enthusiast

Hyperthreaded Core Sharing VM setting

Hello all,

I need some clarification on the "Hyperthreaded Core Sharing" VM setting. Here is the configuration: Dell PowerEdge 6850 with 2 x dual-core 3.00 GHz Xeon CPUs, Hyperthreading active, and 12 GB RAM. This host has only three VMs, each running Windows Server 2003 64-bit with SQL Server installed.

Now the problem: all three VMs were performing slowly. The Windows GUI was sluggish, application load times were bad, just generally slow machines. If we look in the VIC, average CPU usage on the ESX host was around 2000 MHz. Shares are normal for all three VMs, no limits are set, and we don't have any reservations. Yet all three VMs are just too slow.

So we started setting shares and reservations, but nothing made a big difference. Then we set "Hyperthreaded Core Sharing" to None, and now the VMs are performing a lot better.

What is going on? The ESX host has hyperthreading enabled, and the VMs default to Any for that setting. All the VMs are (right now) only running Windows and SQL Server (64-bit).

Here are my questions. Why did changing this setting help? (I read the documentation on this setting, but I still don't know why it helped.) Should I disable hyperthreading on the host? If this setting affects plain Windows VMs like these, why is Any the default? Does the fact that the VMs run 64-bit OSes have any bearing on this problem?

Thanks for your time and help (as I am lost),

Nick

2 Replies
VMWareNewbie
Enthusiast

I got this answer from this link. Below is the info from the page that should help explain.

The main reason for this is the 2-vCPU setup. Explanation: if VMs with different numbers of vCPUs (e.g., one vCPU and two vCPUs) are running on the same virtual infrastructure cluster, there is a good chance that one vCPU of your dual-vCPU VM can work alone on one physical CPU while the other vCPU has to share a physical CPU with another VM. This causes tremendous synchronization overhead between the two vCPUs (you don't have this on real physical multi-CPU machines because the sync is hardware-based), which can push the System process inside the VM up to 50-100% CPU load.
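To see why one starved vCPU drags the whole VM down, here is a toy model of strict co-scheduling (a deliberate simplification, not ESX internals: the `skew_limit`, tick granularity, and 50% share are made-up illustration values). The scheduler stops the fast vCPU whenever it gets too far ahead of its sibling, so the VM runs at roughly the speed of its slowest vCPU:

```python
# Toy model (NOT ESX internals): strict co-scheduling of a 2-vCPU VM.
# vCPU0 has a whole physical core; vCPU1 shares its core, so it only
# gets cycles half the time. The scheduler stops vCPU0 whenever the
# progress skew between the two exceeds a threshold, wasting its time.

def run(ticks, vcpu1_share=0.5, skew_limit=2):
    p0 = p1 = 0          # progress counters for each vCPU
    wasted = 0           # ticks vCPU0 spends co-stopped, doing nothing
    for t in range(ticks):
        if p0 - p1 >= skew_limit:
            wasted += 1  # co-stop: vCPU0 must wait for vCPU1 to catch up
        else:
            p0 += 1
        # vCPU1 only runs on the ticks its shared core gives it
        if t % int(1 / vcpu1_share) == 0:
            p1 += 1
    return p0, p1, wasted

print(run(1000))                   # vCPU1 on a shared core
print(run(1000, vcpu1_share=1.0))  # both vCPUs on unshared cores
```

With the shared core, vCPU0 loses roughly half its ticks to co-stops even though it has a physical core all to itself; give vCPU1 a full core (the effect of "Hyperthreaded Core Sharing: None") and the waste disappears.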

There are some ways to work around this:

- Limit the number of vCPUs to one. Even if the multi-CPU HAL is still installed in the VM after reducing the number of vCPUs, in our configuration this had a tremendously positive effect on CPU performance and stability in the VM (it sounds strange that 1 vCPU works faster than 2 vCPUs, but that was the case for us, and you can find references to this on Google). First choice and highly recommended.

- Divide your virtual infrastructure into two clusters: one for VMs with 1 vCPU and one for VMs with 2 vCPUs (and a third if you insist on having VMs with 4 vCPUs). Second choice and also highly recommended.

- Under 'Edit virtual machine settings', open the 'Resources' tab and click 'Advanced CPU'. Set 'Hyperthreaded Core Sharing' to None. Make sure you have enough physical CPUs in your virtual infrastructure cluster to give each VM two unshared physical CPUs (less recommended, but it still helps if your virtual infrastructure really is 'oversized').
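For reference, that GUI setting maps to a line in the VM's .vmx configuration file; a sketch, assuming the documented `sched.cpu.htsharing` option name (verify against your ESX version before editing a .vmx by hand):

```
# Equivalent .vmx entry for "Hyperthreaded Core Sharing: None"
# (edit only while the VM is powered off)
sched.cpu.htsharing = "none"   # other documented values: "any" (default), "internal"
```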

kjb007
Immortal

Remember, a hyperthreaded "core" is not a real core. If you are using SMP virtual machines, ESX will try to co-schedule the VM's vCPUs on cores that are close together. ESX is NUMA-aware, so it will be as efficient as possible with CPU cache and memory, and with that setting at Any it will use hyperthreaded logical cores as well, since each one looks much like a full CPU to the scheduler. I disable hyperthreading from the start. While it can help in some cases, I prefer not to use it at all, as it can lead to situations like the one you're witnessing.
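As a side note, later ESXi releases expose the hyperthreading state on the command line; a hedged sketch (this `esxcli` syntax is from newer ESXi, not the classic ESX in this thread, so check your version's reference first):

```shell
# Report hyperthreading state (Supported / Enabled / Active)
esxcli hardware cpu global get

# Disable hyperthreading via the kernel setting (takes effect after reboot)
esxcli system settings kernel set -s hyperthreading -v FALSE
```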

-KjB

VMware vExpert | VCP/VCAP | vmwise.com / @vmwise