We are doing some tests with a service that uses DPDK, following the VMware guide for latency-sensitive, data-intensive workloads. So we set the Latency Sensitivity field to High, reserved all of the VM's memory, reserved all of its CPU, etc., etc.
I just have one question regarding the CPU reservation needed to guarantee pCPU exclusivity for the vCPUs. Imagine we have a VM with 4 vCPUs, but really only 2 of them (say vCPU 0 and vCPU 1) need exclusive access to pCPUs; vCPUs 2 and 3 don't.
The host has 2500 MHz CPUs: 2 sockets, each with 12 cores. Right now we set the CPU reservation to 4 × 2500 = 10000 MHz (together with Latency Sensitivity = High and so on). This way all four vCPUs are pinned to physical cores, use them exclusively, and stay in the same NUMA node.
If we put 5000 instead of 10000, will vCPU 0 and 1 get exclusive physical cores? Or will all four vCPUs simply be guaranteed 1250 MHz of CPU power each, on any physical core, with no exclusivity?
If 5000 is not the right setting, how can we configure exclusivity per vCPU?
And also, is there any way in the vSphere hypervisor to see whether a vCPU has exclusive use of a specific pCPU?
CPU affinity doesn't guarantee exclusive access to a core: other vCPUs can still be scheduled on that same core. CPU affinity only limits the placement decisions.
What I would do is change the VM from 4 vCPUs to 2 vCPUs and set the CPU reservation based on this formula:

CPU clock speed × number of vCPUs (= 2500 MHz × 2 vCPUs = 5000 MHz)
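As a trivial sanity check of that formula (values taken from this thread; a sketch, not an official sizing tool):

```shell
# CPU reservation = per-core clock (MHz) x number of vCPUs
CLOCK_MHZ=2500   # core clock from this thread
VCPUS=2          # vCPU count after shrinking the VM
RESERVATION=$((CLOCK_MHZ * VCPUS))
echo "Reserve ${RESERVATION} MHz"   # prints "Reserve 5000 MHz"
```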
Be sure to keep at least 10% of the CPU available for ESXi housekeeping.
Thanks for the answer, but we can't do this because we need the 4 vCPUs.

The first 2 vCPUs are used for DPDK, so you can't schedule other VM tasks on them (not even Linux kernel/user tasks, as we isolate those vCPUs with the isolcpus kernel parameter). And for optimal performance, they need exclusive access to real physical CPUs on the host.

We need the other 2 vCPUs for Linux tasks and other application tasks that are not DPDK related. For those tasks we don't need physical CPU exclusivity, and we don't even need a CPU time reservation; those 2 vCPUs can share pCPUs with the vCPUs of other VMs.
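For context, the in-guest isolation mentioned above is a single kernel boot parameter. A minimal sketch, assuming a GRUB-based guest and that vCPUs 0 and 1 are the ones to isolate:

```
# /etc/default/grub inside the guest - keep the Linux scheduler off
# vCPU 0 and 1 so only explicitly pinned DPDK threads run there
# (append to the existing GRUB_CMDLINE_LINUX value)
GRUB_CMDLINE_LINUX="... isolcpus=0,1"
# then rebuild the grub config (e.g. grub2-mkconfig -o /boot/grub2/grub.cfg) and reboot
```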
Moderator: Moved to vMotion and Resource Management
OK i see...
The only way I can think of is setting CPU affinity for all VMs running on that host. That's the only way you can be sure of the vCPU-to-pCPU mapping.
OK, help me out a little bit: what is your definition of "CPU exclusivity"?
If you assign a CPU reservation, it applies to all vCPUs. There is no way to give vCPU 0 40%, vCPU 1 40%, and the remaining vCPUs the other 20% of the CPU reservation.
In your setup you have a total of 24 cores, so I would try something like this:
| VM Name | vCPU | CPU affinity |
|---|---|---|
| DPDK | 0 (in-guest reserved for DPDK) | 0 |
| DPDK | 1 (in-guest reserved for DPDK) | 1 |
That is the only way I can come up with to make sure a core is dedicated to (used exclusively by) a vCPU.
For me, CPU affinity means that a vCPU will always use a specific pCPU.

CPU exclusivity means that the vCPU is the only one that can use that pCPU; no other VM can. Exclusivity needs affinity, of course, but affinity does not imply exclusivity.

So, in the table, the affinity is fine. But what prevents VM1/VM2 from also using pCPU 0 or pCPU 1?
I think that when reserving 2500 × 4 = 10000 MHz of CPU for the DPDK VM, vSphere will also apply CPU exclusivity, ensuring that pCPUs 0 to 3 (for example) are not used by any other VM, or even by vSphere itself, at least when Latency Sensitivity = High.

So my point is that this wastes resources, because what I really want is for only 2 of the 4 vCPUs to be exclusive: vCPU 0 uses pCPU 0 exclusively, vCPU 1 uses pCPU 1 exclusively, but vCPUs 2 and 3 use any pCPU without any kind of exclusivity. So VM1/VM2 could use pCPUs 2-23, but not pCPUs 0 and 1.

I don't know if that is possible. I know I can assign affinity per vCPU, but I don't see a way to assign pCPU exclusivity to only some vCPUs of the VM.
I wouldn't use any affinity at the ESXi or VM level to do this: you would have to set it on every VM on the host (as per the table in the reply posted above), and you would lose the ability to use vMotion or DRS on all of those VMs.
Just build the VM with 4 vCPUs and give it a full reservation of 10000 MHz.

The vCPUs of that VM will get scheduled across different pCPUs over time, but the reservation is as close to a guarantee as you can get that whenever the vCPUs of that VM request to be scheduled, ESXi will schedule them.
This should help: Beating a dead horse - using CPU affinity - frankdenneman.nl
The CPU affinity (set on VM1 and VM2 so that it excludes those cores) will prevent VM1 and VM2 from running on pCPU 0 and pCPU 1.
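To make that concrete, here is a sketch of the advanced settings this approach implies. The key name is `sched.cpu.affinity` per the .vmx convention; the core lists are illustrative, and the range syntax shown is what the vSphere Client affinity field accepts, so verify against your ESXi version:

```
# DPDK VM - restrict its 4 vCPUs to pCPUs 0-3
sched.cpu.affinity = "0-3"

# VM1 and VM2 - keep them off pCPU 0 and 1
sched.cpu.affinity = "2-23"
```

Note that this affinity is per VM, not per vCPU, which is part of why it can't express "exclusive cores for only two of four vCPUs".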
From quickly reading the performance guide, I guess you are right.
When you set the sensitivity to High, you effectively disable resource rescheduling: each vCPU is mapped to a pCPU and kept there, hence the 100% CPU reservation. On the memory side, reserving the entire VM memory helps address translation: the VM's pages are permanently mapped to specific physical memory addresses, so the translations never need to be repeated over and over, and you get the best performance possible.
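For reference, the knobs described above map to these per-VM advanced settings (key names as used in VMware's latency-sensitivity tuning documentation; the memory figure is a hypothetical example, not from this thread):

```
sched.cpu.latencySensitivity = "high"
sched.cpu.min = "10000"   # full CPU reservation in MHz (4 vCPUs x 2500 MHz)
sched.mem.min = "8192"    # full memory reservation in MB (hypothetical 8 GB VM)
```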