Ardaneh
Enthusiast

In my view, the only reason you would be using remote memory is that no NUMA topology is exposed to your guest OS while the VM uses more capacity than one NUMA node provides. If you are running Microsoft Windows, you can check the NUMA configuration seen by the guest with the "Coreinfo" tool from Sysinternals. At the VM level, you can check the vmware.log file inside your VM folder ("cat /vmfs/volumes/YOUR VOLUME NAME/YOUR VM NAME/vmware.log | grep -i vpd"); you should see more than one VPD (virtual proximity domain) there, otherwise no NUMA topology is exposed to your VM.
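As a quick sketch of that check, counting the VPD lines is enough; the log lines below are a hypothetical stand-in just to show what the count tells you, not verbatim vmware.log output:

```shell
# On the ESXi host, count the vNUMA nodes exposed to the VM (paths are placeholders):
#   grep -ic vpd "/vmfs/volumes/YOUR VOLUME NAME/YOUR VM NAME/vmware.log"
# Each "VPD" (virtual proximity domain) line corresponds to one virtual NUMA node.

# Illustrative stand-in: hypothetical log lines (format assumed, not verbatim)
printf 'numaHost: ... VPD 0 ...\nnumaHost: ... VPD 1 ...\n' > /tmp/sample-vmware.log
grep -ic vpd /tmp/sample-vmware.log   # a count greater than 1 means vNUMA is exposed
```

If the count is 0 or 1, the guest sees a single (or no) virtual NUMA node and cannot place memory NUMA-locally on its own.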

The recommendation is to create a VM with "N" vCPUs and 1 core per socket, unless you have a specific reason to do otherwise (licensing, or hitting the 64-CPU limitation in Windows).

"Cores per socket" exists mainly for licensing purposes. If you increase that value, you may run into performance issues (many NUMA domains, each with only a small share of resources), and your applications would have to optimize themselves for that kind of topology. So I recommend you consider and test the following:

- Disable Sub-NUMA Clustering (SNC) in the BIOS

- Create the VM with N vCPUs and 1 core per socket (no virtual NUMA topology will be exposed if fewer than 9 vCPUs are assigned to the VM, or if the vCPU count does not exceed the number of physical cores per socket)
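That layout can be expressed directly in the VM's .vmx file (VM powered off before editing); the values below are only an example of the N-vCPUs-by-1-core shape:

```
numvcpus = "12"
cpuid.coresPerSocket = "1"
```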

- If you have a VM with a large amount of memory (more than the memory capacity of one NUMA node) and your workload is not CPU intensive, you can use the "numa.PreferHT" setting to keep your VM inside one NUMA node (in that case, your VM will not use remote memory)
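A minimal sketch of that setting as a per-VM advanced option, assuming you edit the .vmx while the VM is powered off (numa.vcpu.preferHT is the per-VM form; there is also a host-wide Numa.PreferHT advanced setting):

```
numa.vcpu.preferHT = "TRUE"
```

This tells the NUMA scheduler to count hyper-threads when sizing the NUMA client, so the VM prefers staying on one node instead of spanning two.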

- From the socket perspective, on vSphere 6.5+ (you mentioned you are on 6.7), setting more than 1 core per socket no longer affects the NUMA topology presented to the guest OS (for example, 12 vCPUs with 2 cores per socket). But if you have a cache-intensive workload, or a smart application that can take advantage of the CPU cache, you should increase the number of cores per socket (for example, 16 vCPUs with 8 cores per socket)
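For that cache-intensive case, the .vmx equivalent would look like the fragment below (16 vCPUs presented as 2 sockets of 8 cores; illustrative values only):

```
numvcpus = "16"
cpuid.coresPerSocket = "8"
```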

I hope this helps.
