I'm working on a project where we are replacing fat clients with VDI at a relatively low density (8 to 12 concurrent virtual desktops per server). Please understand that this low density is deliberate; it's what best suits our users.
We have specified servers with two Xeon processors, 192 GB of memory, high-performance SAS SSDs, dual NVIDIA GPUs (Tesla M10s), dual 10 GbE NICs, dual PCoIP accelerators, etc.
The server platform's PCI Express slots are allocated to the processor sockets in a way that allows us to split the GPUs, 10 GbE NICs, and PCoIP accelerators evenly between the two processors.
Ideally, with the exception of storage, every VM would use only NUMA-local hardware.
I know that I could modify each linked clone's .vmx file to specify CPU/memory affinity, networking (either SR-IOV or a vSwitch), and so on. However, I understand that recomposing the VM would likely wipe out any clone-specific hardware settings, and we definitely want to take advantage of linked clones for patching and updates. Also, VM migration/vMotion is not a requirement.
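For clarity, the kind of per-VM .vmx overrides I have in mind look roughly like this (a sketch only; the node number, CPU range, and PCI address below are placeholders, not our actual topology):

```ini
; Pin the VM's vCPUs and memory to NUMA node 0
numa.nodeAffinity = "0"

; Optionally restrict the vCPUs to specific host logical CPUs on that node
sched.cpu.affinity = "0-15"

; Pass through a device (e.g., the PCoIP accelerator) attached to socket 0;
; the PCI address is a placeholder
pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:04:00.0"
```

The problem is that these are exactly the per-clone settings I'd expect a recompose to discard, which is why I'm hoping ESXi can do this placement automatically.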
So my (loaded) questions are:
Can ESXi be configured to take resource locality (of vGPUs, NICs, etc.) into consideration when placing/starting VMs?
If so, what configuration parameters dictate that?
It would be a shame if VMs running on NUMA node A ended up using the GPU and NICs attached to NUMA node B, and vice versa.
Thank you so very much for your time and thoughts.
-- Follow-up --
I just want to point out that XenServer can be configured to allocate NVIDIA GPUs based on NUMA node.