I have read about many people using passthrough and I have been experimenting heavily myself. I've gotten a RAID controller, several GPUs, and two USB cards (one of which presented as four separate PCI addresses) all working properly.
The challenge I'm having is that once other VMs are running, if I shut down the VM with passthrough for any reason, I can't get passthrough working again. If it's a GPU being passed through, the VM may stop working entirely.
When I use Fusion as the VMRC, it helpfully floats a message up: "Address Unresolvable".
My guess is that the memory regions the guest kernel needs for direct device access (the device's MMIO ranges) have to land in specific areas. The hypervisor doesn't seem to reserve them, so if any other VM launches first, it can claim that space and leave the passthrough device unmappable.
But that's a guess.
Is anyone else experiencing this? If we really want to use passthrough, this needs to get fixed. And with the Linux-compatible driver model gone, this is going to be more and more important for those who want to use ESX with anything less than very mainstream hardware.
For example, I'm looking at using a high-performance NVMe RAID that uses an NVIDIA GPU as its processor. I spoke to the vendor and ESXi is not on the roadmap due to the driver changes, while Linux is fully supported. Although this product is pretty niche (but great for storage), the same is true for the vast majority of cards and devices. Perhaps not the majority we'd run on vSphere, but "plenty".
Anyway, just curious if others are doing the same thing. What I'm doing now is launching the passthrough VMs first from local.sh, and then vCenter and the other machines can come up.
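For anyone who wants to try the same ordering trick, here's a minimal sketch of what my /etc/rc.local.d/local.sh approach looks like. It's hedged: the VM ID (23) and VM name are placeholders you'd look up on your own host with `vim-cmd vmsvc/getallvms`, and the sleep is a crude stand-in for a real readiness check.

```shell
#!/bin/sh
# /etc/rc.local.d/local.sh fragment: power on the passthrough VM
# before anything else starts. Find your VM's ID first with:
#   vim-cmd vmsvc/getallvms
# The ID below (23) is a placeholder for illustration.

PASSTHRU_VMID=23

# Power on the passthrough VM if it isn't already running.
state=$(vim-cmd vmsvc/power.getstate "$PASSTHRU_VMID" | tail -1)
if [ "$state" != "Powered on" ]; then
    vim-cmd vmsvc/power.on "$PASSTHRU_VMID"
fi

# Crude settle delay so the device is claimed before other VMs
# (vCenter, etc.) begin autostarting.
sleep 60
```

The other half of the trick is making sure vCenter and the rest are on the normal autostart list (or started later in this same script) so they always come up after the passthrough VM has grabbed its device.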