I have a box with 2 CPU sockets (2 * 40 cores), running CentOS. I'd like to dedicate the first CPU socket to VMs. Is this doable with VMware Workstation?
TIA, Vitaly
If the host machine can support NUMA, it is better to turn on the NUMA in the host UEFI/BIOS.
From the log, it appears VMware Workstation can see the NUMA nodes.
| vmx| I125: NUMA node 0: 15973MB, cpus 0x00000f0f 0x00000000 0x00000000 0x00000000
| vmx| I125: NUMA node 1: 16101MB, cpus 0x0000f0f0 0x00000000 0x00000000 0x00000000
But the VM will not see the NUMA architecture. I don't think Workstation Pro can take advantage of NUMA directly; for that you need ESXi. ESXi also allows you to assign CPU affinity.
Running a Prime95 stress test in a 6-vCPU Windows 10 VM on an Ubuntu 18.04 host (NUMA enabled, 2 CPU sockets, Workstation Pro 15.5.x), the System Monitor on the host shows 100% on 6 host CPUs. Occasionally a busy thread flips to a different host core/thread, but it then stays at 100% there for some time; it does not keep jumping around.
Probably the largest determinant of whether a vCPU will be "sticky" to a physical CPU core/thread is the execution profile of the application(s) running in the VM. If the application causes a lot of VM-EXITs (such as disk I/O or network I/O), assigning affinity may not add much performance.
Using the Prime95 example: it is compute intensive, so it is unlikely to trigger frequent VM-EXITs and can stay somewhat sticky to a host core/thread, unless it needs to read/write memory not in cache or perform disk/network I/O, causing a VM-EXIT and a possible switch to a different core after the next VM-ENTRY. I am assuming that with NUMA turned on and this somewhat sticky vCPU-to-host-CPU association, performance is better than without NUMA.
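On a Linux host you can cross-check the NUMA layout that Workstation logs against what the kernel reports, straight from sysfs (a sketch; no extra packages assumed):

```shell
# Print each NUMA node and the logical CPUs it contains, from sysfs.
# Compare the cpulist values with the CPU masks in vmware.log above.
for node in /sys/devices/system/node/node[0-9]*; do
    echo "$(basename "$node"): cpus $(cat "$node"/cpulist)"
done
```

If the `numactl` package is installed, `numactl --hardware` shows the same topology plus per-node memory sizes.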
There's nothing in Workstation itself which will do that.
I found this hack, which involves running a script once all your VMs are up, to tell the Windows host OS where to run the processes backing each VM: https://houseofbrick.com/troubleshooting-vmware-workstation-at-100-cpu/
Thank you! Sorry, I didn't mention that I have CentOS as the host OS.
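On a Linux host such as CentOS, the same idea as the Windows script can be done with taskset. A sketch, assuming the per-VM host processes are named vmware-vmx and that logical CPUs 0-39 belong to the first socket (verify with `lscpu` first):

```shell
# Pin every running vmware-vmx process, including all its threads (-a),
# to logical CPUs 0-39 (assumed here to be socket 0 - check with lscpu).
for pid in $(pgrep vmware-vmx); do
    taskset -a -c -p 0-39 "$pid"
done
```

Run it after all your VMs are started, since the affinity applies only to processes that already exist.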
Just for reference (I don't know if this still works), the thread "Tie the VM to a cpu?" mentions setting .vmx affinity values like
processor0.use = "TRUE"
processor1.use = "FALSE"
etc.
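On an 80-logical-CPU box, writing all those lines by hand is tedious; a quick shell sketch can generate them (this assumes CPUs 0-39 are the first socket - verify the real mapping with `lscpu` or the NUMA masks in vmware.log, since node CPU numbering can be interleaved):

```shell
# Emit processorN.use lines enabling only logical CPUs 0-39
# (assumed socket 0) out of 80; append the output to the VM's .vmx file.
for n in $(seq 0 79); do
    if [ "$n" -le 39 ]; then v=TRUE; else v=FALSE; fi
    echo "processor$n.use = \"$v\""
done
```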