VMware Cloud Community
014
Contributor

Virtual CPU Cores Changed to Sockets

I just realized today that all my virtual machines have multiple single-core sockets instead of 1-2 multi-core sockets. I didn't set them up this way. It must have happened either when I upgraded from ESX 4.1 to ESXi 5.0, or when I upgraded the virtual hardware version from 7 to 8; I'm not sure which.

Here's an example. I set up a VM with two quad-core sockets, which results in 8 threads available to the OS.

Sockets: 2

Cores: 4

But now that same machine is configured like this without my doing anything (still 8 threads for the OS):

Sockets: 8

Cores: 1

Does anyone know why this happened?

5 Replies
scottyyyc
Enthusiast

It's virtualized NUMA. While it doesn't impact performance, it has a huge impact wherever per-socket licensing comes into play. In v4, if you wanted to give a VM 8 vCPUs, that meant 8 sockets, which in many cases meant having to license 8 sockets. Windows Server 2008 R2 Standard, for example, is limited to 4 sockets but allows unlimited cores, so on v4 you had an effective limit of 4 vCPUs. Now that limit is gone. My company has already seen big benefits from this. Before, especially if the software in question was draconian or expensive with socket licensing, it sometimes made more sense to go physical, purely because a simple 2-socket physical system can easily give you 12+ cores.
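If you want the old topology back, the per-VM core count can be set in the vSphere Client (hardware version 8) or directly in the .vmx file while the VM is powered off. A minimal sketch, assuming the standard `numvcpus`/`cpuid.coresPerSocket` parameters (the values here are examples):

```
numvcpus = "8"                 # total vCPUs presented to the guest
cpuid.coresPerSocket = "4"     # guest sees 8 / 4 = 2 sockets with 4 cores each
```

Note that `numvcpus` must be evenly divisible by `cpuid.coresPerSocket`.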

a_p_
Leadership

Not sure what is causing this. However, if the VM's log files have not already rolled over, you may download them to find out when exactly the "conversion" happened.
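The CPU-topology settings do show up in `vmware.log` at power-on, so a search like the one below works on a downloaded log. A minimal sketch, using a fabricated sample file because the exact log wording varies between ESXi builds:

```shell
# Fabricated example of the kind of lines vmware.log records at power-on
# (the real format differs between builds -- this is only illustrative).
echo 'DICT  cpuid.coresPerSocket = "1"' > vmware-sample.log
echo 'DICT             numvcpus = "8"' >> vmware-sample.log

# Search for the CPU-topology settings; on a real log, the timestamps
# on these lines show when the configuration changed.
grep -i -E 'coresPerSocket|numvcpus' vmware-sample.log
```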

André

014
Contributor

There is a performance behavior related to the virtualization of sockets and cores that I have tested. The ESX host physically has two sockets with HT-capable quad-cores, resulting in 16 logical processors. During my test, the VM I was testing was the only one on the host. Here is what I found.

  • A VM with 2 sockets and 8 cores makes 16 threads for the OS. This setup uses only 50% of the CPU power of the ESX host.
  • A VM with 2 sockets and 4 cores makes 8 threads for the OS. This setup uses 100% of the CPU power of the ESX host.
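In .vmx terms, the two layouts above would look like this (assuming the standard `numvcpus`/`cpuid.coresPerSocket` parameters):

```
# 2 sockets x 8 cores = 16 vCPUs (only 50% of host CPU in the test above)
numvcpus = "16"
cpuid.coresPerSocket = "8"

# 2 sockets x 4 cores = 8 vCPUs (100% of host CPU)
numvcpus = "8"
cpuid.coresPerSocket = "4"
```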

My question in the OP was already answered. I just thought this was worth noting since you mentioned performance.

Buck1967
Contributor

You wrote:

  • A VM with 2 sockets and 8 cores makes 16 threads for the OS. This setup uses only 50% of the CPU power of the ESX host.
  • A VM with 2 sockets and 4 cores makes 8 threads for the OS. This setup uses 100% of the CPU power of the ESX host.

That is interesting, but the question is: was there actually more work getting done?

014
Contributor

No, there wasn't more work getting done. I was testing different setups to try to get the most folding done in the shortest amount of time. The VM's CPU usage was 100% in both cases, but the host's was not.
