Well, that adapter isn't certified for any release to begin with, so you'll have to take what you can get.
I understand, but since it is purely a software issue it was worth a shot asking if someone knew how to fix it.
I've noticed issues as well with 6.7 when I had things working with 6.5+
Just a quick note of progress with 6.7...
I tried a couple of different hosts with different NVIDIA cards (1030, 1080 Ti) and got the same results: after the card initialized, it would shut down/crash. From that point forward, the dreaded error code 43 in device properties. I've read many posts in the past saying that 43 was an intentional disable in software when the driver detected it was running in a VM. So it got me thinking...
I always made sure that I had "hypervisor.cpuid.v0=FALSE" in my config before ever passing through the video card. But something changed in 6.7, or maybe Nvidia is looking for something else?
So I tried this:
- Build a Windows 10 VM (v1809)
- Don't install VMware Tools
- Install Chocolatey (https://chocolatey.org) for easier install of TeamViewer and Nvidia drivers
- Install TeamViewer via Chocolatey (so I can connect remotely) *also I didn't have USB hardware that I could pass through
- Disable the SVGA adapter: "svga.present=FALSE"
- Add PCI devices: 1080 Ti, audio device
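For reference, the relevant .vmx entries from the recipe above would look something like this (just the two settings mentioned in this thread; the PCI devices themselves are added through the vSphere UI as usual):

```
# Hide the hypervisor from the guest so the Nvidia driver doesn't refuse to load
hypervisor.cpuid.v0 = "FALSE"
# Disable the virtual SVGA adapter so the passed-through GPU is the only display device
svga.present = "FALSE"
```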
OK, so the start was not pretty: Windows detects the video card and needs a reboot after the initial driver install, and with no VMware Tools installed it's a cringe-worthy moment.
But after the reboot the driver took, and the output looked stable.
After that I installed the latest Nvidia drivers and all seems OK.
So, what was it? svga present? vmware tools?
If I had to guess, it might be svga, because I saw a similar issue with Ubuntu 18.10. I was having problems getting the Nvidia driver to work in Ubuntu, so on a fluke I disabled the SVGA adapter and voila! Ubuntu 18.10 on GPU!
I may install vmware tools and see if that makes a difference.
Anyway, I wanted to throw this out there for all the folks suffering with this issue. If it's Nvidia's doing, then I suppose it'll just be a matter of time before they find another way.
My AMD GPU doesn't like shutdown. I can reboot Windows fine, but when I shut down only the Windows VM, it won't boot up again without an ESXi reboot.
I saw this behavior as well after some time with a GTX 1080 Ti, and later also with an RTX 2080. I could run the Windows VM only once without problems, but as soon as I shut down/rebooted it I saw error 43. On the other hand, I had an Ubuntu 18.10 VM that could be cycled without issues. But once the Windows VM had been started and shut down, the Ubuntu VM wouldn't work after that either. The only way to get it back working was a reboot of ESXi. So it appeared the card was left in some state that didn't allow it to start properly after that. I looked into some of the PCIe bus reset methods, but only tried d3d0 other than the default bridge mode. No change.
Then I read some articles online where others were having similar issues. They reported one workaround was to disable the GPU device in Device Manager before a shutdown, and then re-enable it when the VM was started again. It sounded ugly, but I gave it a try and lo and behold, it worked. I was able to start up/shut down the VM with no more error code 43s. I found a way using devcon.exe to automate this as a startup/shutdown script, so it winds up being painless and automatic. I don't have to remember to do it manually and it just works now.
That might be something worth trying for you. I’m running 6.7.0 update 1 with latest patch.
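As a rough sketch of that devcon workaround (the DEV_ value below is a placeholder for illustration — look up your card's actual hardware ID first with `devcon find "PCI\VEN_10DE*"`), a pair of scripts along these lines can be wired into the shutdown/startup script slots in gpedit or Task Scheduler:

```
:: disable-gpu.cmd -- run as a shutdown script, before Windows powers off
:: VEN_10DE is Nvidia's PCI vendor ID; DEV_1B06 is a placeholder device ID
devcon disable "PCI\VEN_10DE&DEV_1B06*"

:: enable-gpu.cmd -- run as a startup script, after Windows boots
devcon enable "PCI\VEN_10DE&DEV_1B06*"
```

devcon.exe ships with the Windows Driver Kit and has to be copied somewhere on the PATH inside the guest.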
He-hee, yes, it really does sound ugly, because when Windows starts without drivers it pushes all the icons around and it's not nice on the eyes. I have Windows 8.1. Reboot works without problems; it's only after shutting Windows down that I must also reboot the host. But it's not a problem for me, because usually I don't need to shut Windows down. When I want to shut down the computer (host), I just push the shutdown button and it first cleanly shuts down Windows (set to do so in the automatic "VM Startup/Shutdown" settings), and when I start the computer it boots Windows up as well. The boot process is only a little longer, no problem. The only time I need to shut down just Windows is when I want to delete or make snapshots, as passthrough doesn't allow online snapshots or suspends. Then I use phone apps ("Whatchlist" and "vmwPAD") to connect to vCenter and work with snapshots. Of course another computer can also be used for this, but the phone app is very convenient. After that I reboot the host (with the same phone app) and Windows is soon up again.
My GTX 1660Ti doesn't like reboot on 6.7u1 either.
It works like a charm on 6.5u2.
Tried all reset methods but no luck.
Did you try 6.7u2? I have the same problem on 6.7u1. After the initial boot of the host, the GPU (GTX 1660) can be passed through to a Win 10 guest without any problem. After the guest reboots or shuts down and boots again, the GPU passthrough doesn't work and displays a code 43 error. I also have a Quadro P2000, and I discovered that after the guest reboot/shutdown, the P2000 fan runs at full speed. I think that's the PCI reset working on the P2000. Unfortunately, the consumer GPU can't be passed through due to an incorrect PCI reset. Is it a bug in 6.7? Has someone reported it to VMware?
Also, I noticed that ESXi is skipping the reset of my GPU:
2019-05-19T15:59:54.818Z cpu0:2099331)PCI: 967: Skipping device reset on 0000:01:00.0 because PCIe link to the device is down.
2019-05-19T15:59:54.818Z cpu0:2099331)IOMMU: 2502: Device 0000:01:00.0 placed in new domain 0x430430ac2d10.
2019-05-19T15:59:54.818Z cpu0:2099331)PCI: 967: Skipping device reset on 0000:01:00.1 because PCIe link to the device is down.
2019-05-19T15:59:54.818Z cpu0:2099331)PCI: 967: Skipping device reset on 0000:01:00.2 because PCIe link to the device is down.
2019-05-19T15:59:54.818Z cpu0:2099331)PCI: 967: Skipping device reset on 0000:01:00.3 because PCIe link to the device is down.
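For what it's worth, the reset method ESXi uses per device can be forced in /etc/vmware/passthru.map on the host (reboot the host after editing). This is only a sketch — d3d0 is just one of the available methods (flr, d3d0, link, bridge, default), and which one helps, if any, seems to vary by card:

```
# /etc/vmware/passthru.map
# columns: vendor-id  device-id  resetMethod  fptShareable
# 10de = Nvidia; "ffff" matches every device ID from that vendor
10de  ffff  d3d0  false
```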
GTX series cards are consumer cards and therefore not supported anyway.
I know, but it works in 6.5. So did VMware change something to make consumer GPUs not work?
Even in 6.5 it wasn't supported. When you use unsupported hardware, it can stop working at any time. Just because it worked in 6.5 doesn't mean it'll work in later versions. It may. Then again, it may not.
Hey, I have a new idea. I have seen that some people can pass through their GTX GPUs on ESXi 6.7, but their CPU is a Xeon or their motherboard uses the X99 chipset. Are you using a consumer CPU or motherboard? Maybe the problem is motherboard and CPU support — maybe the PCI reset method also needs CPU or chipset support? I am using an i5-6500 and an ASUS B150M-A/M.2, and I've had no luck keeping a passed-through GTX 1660 working after a VM reboot. I want to try X299 with an i9-9900X. If it works, I will report back to you. If not, I will move on to KVM.
Just go back and try latest 6.5, it works like a charm.
Hey guys, I have tested ESXi 6.7u2 on an i9-9900X, ASUS WS X299 PRO/SE and GTX 1660. The GPU passthrough works, with no more code 43 after reboot. I think the key point is a server-grade motherboard, or no iGPU in the CPU? Also, I have noted that the people who succeeded in passing through on 6.7 are using Xeon CPUs. I hope this helps you.