I have a cluster of 4 ESXi nodes used for VDI desktops. 2 of the 4 hosts have vGPU hardware cards. When I create a pool for vGPU, I'm only able to select the cluster object to deploy against (i.e. the individual ESXi servers don't show). When the pool is being created, ESXi hosts within the cluster are chosen at random. If a desktop lands on one of the ESXi hosts without a GPU card, the desktop pool disables provisioning (because the VM has a vGPU profile and expects an ESXi host with a GPU card installed).
I have to vMotion the VM (stuck at Customizing and powered off) to an ESXi host with a GPU card and then re-enable provisioning on the pool. It then completes.
This is obviously not the way we want the vGPU pool deployed. My question is: how can we specify, during pool creation, which ESXi servers within the cluster the desktops should be deployed to? Once again, only the cluster object is shown and available for selection during pool creation.
If each host has its own storage, set the pool to use that local storage and all the VMs will land there. I'm not sure how to pin VMs to specific hosts when the storage is shared; curious to see what others say.
Storage is shared, as that's pretty common :-). You'd think VMware would simply allow one to pick either the cluster object or individual ESXi servers during pool creation. It doesn't make sense that they don't allow for that - especially now that vGPU is more common, and customers usually don't install GPU cards in every server within the cluster.
I completely agree with you, but VMware sees a cluster as a grouping of hosts with identical hardware. It sounds like you have two hardware variations and need to break them out into two clusters (one with GPUs and one without) to match that design. Alternatively, you'll need to keep doing what you're doing. You could also work with your account team/SE or submit a feature request for them to possibly change this.
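One workaround some admins use, if splitting the cluster isn't an option, is a DRS VM/Host "must run" affinity rule that pins the vGPU desktops to the GPU-equipped hosts, so DRS initial placement never lands them on a host without a card. A hedged PowerCLI sketch - the vCenter address, cluster, host, and VM names below are placeholders for your environment, not anything from this thread:

```
# Sketch only - assumes PowerCLI is installed and you have DRS enabled.
Connect-VIServer vcenter.example.com

$cluster = Get-Cluster "VDI-Cluster"

# Host group containing only the two GPU-equipped hosts
New-DrsClusterGroup -Cluster $cluster -Name "GPU-Hosts" `
    -VMHost (Get-VMHost "esxi01.example.com","esxi02.example.com")

# VM group containing the vGPU desktops (adjust the name pattern)
New-DrsClusterGroup -Cluster $cluster -Name "vGPU-Desktops" `
    -VM (Get-VM "vgpu-desktop-*")

# "Must run" rule: DRS will only place/power these VMs on the GPU hosts
New-DrsVMHostRule -Cluster $cluster -Name "vGPU-on-GPU-hosts" `
    -VMGroup "vGPU-Desktops" -VMHostGroup "GPU-Hosts" -Type "MustRunOn"
```

One caveat: DRS group membership is static, so desktops the pool provisions after the group is created would need to be added to the VM group (e.g. by re-running the `Get-VM` filter on a schedule), which is why the two-cluster design is still the cleaner answer.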