Replying to:
jskaznik
Contributor

Thank you Valentin!

I was concerned about a case I ran into a while back. We have a dual-socket ESXi server (20 cores per socket) running four MS SQL Server VMs, each configured with 8 vCPUs, under the DVD Store benchmark. With the default Numa.LocalityWeightActionAffinity value of 130, three of the four VMs were scheduled on one NUMA node and one on the other, and this did not change during the benchmark run. I rebooted the VMs and reran the test, but they were always placed 3:1 across the NUMA nodes. This caused a performance imbalance between the VMs and reduced the combined benchmark result (IOPS) across all four. When the setting was changed to 0, the VMs were balanced across the NUMA nodes (two per node) without CPU contention: CPU ready time was low, the per-VM benchmark results (IOPS) were within 1% of each other, and the combined IOPS across all VMs was also higher.
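For anyone who wants to try the same change, here is a sketch of how the advanced option can be inspected and set from the ESXi shell with esxcli (this assumes the standard /Numa/LocalityWeightActionAffinity advanced option path; the change can also be made in the vSphere Client under Advanced System Settings):

```shell
# Show the current value of the NUMA action-affinity weight (default 130)
esxcli system settings advanced list -o /Numa/LocalityWeightActionAffinity

# Set it to 0 to stop action affinity from overriding NUMA load balancing,
# as in the test described above
esxcli system settings advanced set -o /Numa/LocalityWeightActionAffinity -i 0
```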

 
