Clearly these servers weren't bought for this specific purpose - 8 doesn't go into 10 (or 20) very well, and 32 doesn't go into 80 very well either!
That doesn't really matter; it gives the CPU scheduler more flexibility to schedule vCPUs and host processes.
So let's say I spin up 4 VMs on this host. Presumably they will utilise 32 of the 40 logical processors (8 vCPUs x 4 VMs) and 128GB (32GB x 4 VMs) of the 160GB of RAM. I'm guessing ESXi will spread these across both NUMA nodes, so there will be 2 VMs on each socket, which leaves 4 logical processors and 16GB pRAM 'unused' on each of the two NUMA nodes.
They should, but unfortunately it's not that easy. The main objective is to keep memory local, not to evenly spread the raw number of vCPUs/VMs across NUMA nodes. There is no guarantee your 4 VMs will be placed like that, but if you have 80GB RAM per NUMA node, then it will likely end up this way.
If there is enough memory per node, they may all end up on the same NUMA node if the actual CPU utilization is low. Note that it's not the assigned memory, but the used memory that counts here (unless you reserve all memory). The scheduler can migrate VMs to another NUMA node, though (migrations are indicated by the NMIG counter in (r)esxtop, but personally I've never seen a value other than 0).
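The arithmetic from the question above can be sketched as a quick capacity check. This is just arithmetic with the host and VM sizes discussed in this thread, not a model of the actual ESXi NUMA scheduler:

```python
# Host from this thread: 2 NUMA nodes, each with 20 logical processors
# (10 cores x 2 with hyper-threading) and 80 GB of RAM.
# VMs: 8 vCPUs / 32 GB each, with an even split of 2 VMs per node.
node_logical_cpus = 20
node_ram_gb = 80
vm_vcpus, vm_ram_gb = 8, 32
vms_per_node = 2

spare_cpus = node_logical_cpus - vms_per_node * vm_vcpus
spare_ram_gb = node_ram_gb - vms_per_node * vm_ram_gb

# Matches the question: 4 logical processors and 16 GB pRAM 'unused' per node.
print(spare_cpus, spare_ram_gb)
```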
If I spin up a 5th VM of the same spec, there is enough physical resource to satisfy it, but both CPU and RAM will be split equally between the two NUMA nodes, right? 4 vCPUs and 16GB from each to make up the 8 vCPU / 32GB VM.
No, the single 5th VM will not be split across both NUMA nodes. This is known as wide-NUMA and is by default only enabled when a VM has more than 8 (read: at least 9) vCPUs, or if you exceed the physical core count of one CPU. The VM will be placed just like the others, with all its vCPUs on one node, and if there is not enough free memory on that node it will use remote memory from another node, possibly also for the other VMs. This threshold can be adjusted with the numa.vcpu.maxPerMachineNode and numa.vcpu.min advanced parameters.
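As an illustration only, these would go into the VM's advanced configuration (.vmx). The values here are example numbers, not recommendations for your setup:

```
# Example values only - tune to your own topology before using.
# Maximum number of vCPUs placed on a single NUMA node:
numa.vcpu.maxPerMachineNode = "12"
# vCPU count at which wide-NUMA/vNUMA kicks in (default is 9):
numa.vcpu.min = "9"
```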
You can play around a bit by spinning up VMs on your host and check the actual NUMA stats in (r)esxtop.
Presumably I am going to see some performance degradation on this VM that spans?
Is this still measurable and noticeable on a modern host such as this?
It depends on the workload. Spanning a VM across NUMA nodes and presenting an according virtual NUMA topology to the guest has notable advantages, like a higher memory bandwidth (2 memory buses to use). But for some workloads it can be disadvantageous, especially for CPU cache intensive workloads.
I don't think it will really matter for a Terminal Server/XenApp workload though, as it's typically composed of a multitude of different applications with dynamic workload characteristics.
Here is some more info on NUMA spanning in conjunction with an advanced parameter that forces using HT threads of the local CPU instead of using physical cores on another NUMA node:
https://blogs.vmware.com/vsphere/2014/03/perferht-use-2.html
NUMA is about memory right? So if i make sure each socket has enough RAM to satisfy 3 (instead of 2) VMs worth of workload on it, then potentially I could have 3 VMs localised on each NUMA node, with a modicum of over-comittment on the CPU (24vCPUs running on 20 logical CPUs)?
I don't think the performance penalty of remote memory will be really noticeable in your case, but yes, it would be better to increase the host's physical memory so that each node can accommodate 3 VMs. In my experience, and that of many others, you are more likely to be constrained by RAM than by CPU resources.
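The over-commitment you describe can be put into numbers. Again, this is just arithmetic with this thread's figures, not scheduler behaviour:

```python
# 3 VMs of 8 vCPUs / 32 GB on one NUMA node with 20 logical CPUs and 80 GB RAM.
vms_per_node = 3
vm_vcpus, vm_ram_gb = 8, 32
node_logical_cpus, node_ram_gb = 20, 80

total_vcpus = vms_per_node * vm_vcpus            # 24 vCPUs on 20 logical CPUs
cpu_overcommit = total_vcpus / node_logical_cpus # 1.2x CPU over-commitment
ram_shortfall_gb = vms_per_node * vm_ram_gb - node_ram_gb  # 96 GB needed vs 80 GB present

print(total_vcpus, cpu_overcommit, ram_shortfall_gb)
```

With 80GB per node you are 16GB short of keeping all three VMs' memory local, which is why adding physical RAM helps more than the modest 1.2x CPU over-commitment hurts.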
Also check out these great articles on the general topic of NUMA:
http://frankdenneman.nl/2010/09/13/esx-4-1-numa-scheduling/
Sizing VMs and NUMA nodes - frankdenneman.nl
http://frankdenneman.nl/2010/10/07/numa-hyperthreading-and-numa-preferht/
http://www.datacenterdan.com/blog/vsphere-55-bpperformance02-numa-alignment
I hope this answers your questions so far. If you need clarification or have other questions, let me know.