I just realized today that all my virtual machines have multiple single-core sockets instead of 1-2 multi-core sockets. I didn't set them up this way. It must have happened when I upgraded ESX 4.1 to 5.0, or when I upgraded the virtual hardware version from 7 to 8. I'm not quite sure.
Here's an example. I set up a VM with two quad-core sockets, which results in 8 threads available to the OS:
Sockets: 2
Cores: 4
But now that same machine is set up like this without my changing anything (still 8 threads for the OS):
Sockets: 8
Cores: 1
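Both layouts present the same number of logical processors to the guest; a minimal sketch of the arithmetic (the function name is just for illustration):

```python
# The guest OS sees sockets x cores-per-socket logical CPUs
# (times threads-per-core, which is 1 for these vCPU layouts).
def logical_cpus(sockets, cores_per_socket, threads_per_core=1):
    return sockets * cores_per_socket * threads_per_core

print(logical_cpus(2, 4))  # original layout: 2 sockets x 4 cores -> 8
print(logical_cpus(8, 1))  # post-upgrade layout: 8 sockets x 1 core -> 8
```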
Does anyone know why this happened?
It's virtualized NUMA. While it doesn't impact performance, it has a huge impact wherever socket-based licensing comes into play. In v4, giving a VM 8 vCPUs meant 8 sockets, which in many cases meant licensing 8 sockets. Windows Server 2008 R2 Standard, for example, is limited to 4 sockets but allows unlimited cores, so in v4 you were effectively capped at 4 vCPUs. Now that limit is gone. My company has already seen big benefits from this. Before, particularly when the software in question was draconian or expensive with socket licensing, it sometimes made more sense to go physical, purely because a simple 2-socket physical system can easily provide 12+ cores.
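For what it's worth, with hardware version 8 the topology can be pinned explicitly in the VM's .vmx file (or via the vSphere Client's CPU settings). A sketch, assuming 8 vCPUs presented as 2 sockets of 4 cores — the values here are examples, not the OP's actual config:

```
numvcpus = "8"
cpuid.coresPerSocket = "4"
```

With `cpuid.coresPerSocket` unset it defaults to 1, which gives exactly the 8-single-core-sockets layout described above.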
I'm not sure what is causing this. However, if the VM's log files have not rolled over yet, you can download them to find out exactly when the "conversion" happened.
André
There is a performance behavior I have tested related to the virtualization of sockets and cores. The ESX host physically has two sockets with HT-capable quad-cores, resulting in 16 logical processors. During my test, only the VM I was testing existed on the ESX host. Here is what I found.
My question in the OP was already answered. I just thought this was worthwhile noting since you mentioned performance.
You wrote:
That is interesting, but the question is: was there actually more work getting done?
No, there wasn't more work getting done. I was testing different setups to try to get the most folding done in the shortest amount of time. The VM's CPU usage was 100% in both cases, but the host's was not.