VMware Cloud Community
feter20
Enthusiast

How to enable basic NUMA to optimize performance

I've searched documents, including https://docs.vmware.com/en/VMware-vSphere/6.7/vsphere-esxi-vcenter-server-67-resource-management-gui... and https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere-esxi..., to understand "non-uniform memory access".

I'd like to know the correct steps to use basic NUMA on ESXi 6.7 to optimize VM performance.

Assume I have an environment that genuinely needs NUMA: a cluster of ESXi hosts and some VMs with large virtual hardware (say, 16 vCPUs and 48 GB vRAM).

If I want the ESXi hosts to do the NUMA scheduling on these VMs automatically, is this the right procedure?

1. Enable "memory interleaving" in each server's BIOS so that ESXi turns the host into NUMA nodes?

2. Disable the VM resource pools (if any exist) on this ESXi cluster, because NUMA will allocate the resources?

3. The cluster will then do the NUMA scheduling automatically, without per-VM configuration (no need to set the "numa.nodeAffinity" parameters)?

An additional doubt: how do I know whether NUMA is enabled and functioning on an ESXi host? Only via the BIOS?

(The topic "Specifying NUMA Controls" in the official documentation describes only three manual NUMA affinity options for VMs, and I guess those assume a system where NUMA is already enabled...)

nachogonzalez
Commander

Hey, hope you are doing fine:

Here are the prerequisites for NUMA: Resource Management in NUMA Architectures

(Be careful with node interleaving: that doc explains it exactly the opposite of what you described. Node interleaving has to be disabled in the BIOS for ESXi to present NUMA nodes; enabling it makes the memory look like one uniform node.)

Regarding the resource pools:
you can use them. The NUMA calculations happen at the host level, while resource pools work at the cluster level.
Basically, each host has a CPU scheduler that handles resource management, and it works together with the resource pools.
vNUMA is more of a design factor that you need to take into consideration; resource pools won't affect it at runtime.


Now, the nice part: last year I was working at a huge company with over 30k ESXi 6.5 hosts, and we had a project to rightsize vNUMA nodes.
We went to VMware with a plan to remediate based on Virtual Machine vCPU and vNUMA Rightsizing - Rules of Thumb - VMware VROOM! Performance Blog.
Basically, what they said was: unless you have a specific requirement to modify vNUMA settings, you should let the ESXi host decide how to place the memory per CPU core.
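
By the way, about your doubt on how to tell whether a host is actually running with NUMA: besides the BIOS, you can read the topology the host exposes through the API. Here is a minimal pyVmomi sketch, assuming a reachable vCenter (the hostname and credentials below are placeholders, use your own); a node count above 1 means the host is presenting NUMA nodes. I believe "esxcli hardware memory get" on the host itself also prints a NUMA node count.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        numa = host.hardware.numaInfo  # the physical NUMA info as ESXi sees it
        # numNodes > 1 means the BIOS presents NUMA nodes (interleaving off)
        print(f"{host.name}: type={numa.type}, NUMA nodes={numa.numNodes}")
    view.Destroy()
finally:
    Disconnect(si)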

Hope this works, else let me know so i can assist you

feter20
Enthusiast

nachogonzalez, that's so useful!

So it seems that disabling node interleaving in the BIOS is what enables NUMA for ESXi, and that it's the only setting involved.

But new questions popped up: does ESXi/vCenter display the NUMA feature anywhere in the web UI?

And will all the VMs on a host be managed by NUMA scheduling, without exception, once the host has NUMA enabled?

nachogonzalez
Commander
Accepted Solution

I'm not sure about the BIOS setting.

Regarding the other two questions: you won't see any feature to enable or disable vNUMA in vCenter.
The doc I sent you before explains that vNUMA takes into consideration the cores and sockets that a VM has.
So to toy around with it, you have to change those values manually.
Otherwise the hypervisor will handle it by default.
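
If you do want to toy with it, the usual knob is cores per socket on the VM. A rough pyVmomi sketch (the VM name and the value 8 are made up for the example, and the VM has to be powered off for the reconfigure to apply):

from pyVmomi import vim

def set_cores_per_socket(si, vm_name, cores):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.Destroy()
    # numCoresPerSocket shapes the sockets/cores the guest sees,
    # which is what the vNUMA sizing keys off
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(numCoresPerSocket=cores))

# e.g. your 16-vCPU VM with 8 cores per socket -> 2 virtual sockets
# set_cores_per_socket(si, "bigvm01", 8)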

feter20
Enthusiast

Hold on, I found something:

According to the statement in Virtual Machine vCPU and vNUMA Rightsizing - Rules of Thumb - VMware VROOM! Performance Blog, a NUMA node consists of one CPU and its related memory within a host, not all the CPUs and memory within the host, right?

Therefore, if a host has only one CPU and its memory DIMMs, the host offers only one NUMA node; if a host has two CPUs and their related memory DIMMs, it offers two NUMA nodes?

Is my understanding correct that a server can typically present 2 NUMA nodes? (According to the figure below, which is from the document described above.)

[Attached figure from the blog post showing a two-socket host presenting two NUMA nodes]
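
To check myself, here's a quick sanity calculation in Python (the host numbers are made up for my lab; only the comparison logic matters):

# made-up host: 2 sockets x 12 cores, 64 GB of local DIMMs behind each CPU
cores_per_node = 12
mem_per_node_gb = 64
vm_vcpus, vm_ram_gb = 16, 48  # the VM from my first post

# the VM spans two NUMA nodes if it outgrows either resource of one node
spans = vm_vcpus > cores_per_node or vm_ram_gb > mem_per_node_gb
print("VM spans more than one NUMA node:", spans)  # True: 16 vCPUs > 12 cores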

nachogonzalez
Commander

Yes, your understanding is correct.
But please remember that since vSphere 6.5 the recommended setting for vNUMA is to leave the defaults and let the hypervisor handle the vNUMA calculations.
Otherwise, you can manually distribute the NUMA nodes by selecting cores per socket.
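
And if you ever do hit one of those specific requirements, the manual controls you mentioned (like numa.nodeAffinity) are just advanced VM settings. A hedged pyVmomi sketch, pinning a VM to node 0 (again, in general leave this alone and let the scheduler place the VM):

from pyVmomi import vim

# numa.nodeAffinity is the advanced setting from the "Specifying NUMA Controls"
# docs; extraConfig is how advanced options are set through the API
spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="numa.nodeAffinity", value="0"),
])
# vm.ReconfigVM_Task(spec=spec)  # vm looked up as in the earlier snippet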

feter20
Enthusiast

At the beginning I thought the NUMA architecture would span across the physical servers, sharing memory between different servers...

Now I get it, after checking some technical papers.

lol

Thanks for all the answers! You are so patient with me and so professional with my questions! :)

nachogonzalez
Commander

No problem at all
