1. Does this warning mean ESXi would not be able to pass this device through to a VM like I've been doing under 6.7?
The solution is to download the .vib driver for the unrecognized device and inject it into the ESXi 7.0 installation.
You can do it through a procedure similar to this one (but do it for ESXi 7):
2. Is there any hope of ESXi supporting this in the future, or does it generally mean they've dropped support for this device now?
Keep in mind that if your drivers aren't released as a .vib, there's nothing you can do: you'll need to upgrade your hardware or stay on ESXi 6.7.
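As a rough sketch of what the injection can look like on a running host, assuming the vendor ships a standalone .vib that you've copied to a datastore (the path and VIB name below are examples, not from this thread):

```shell
# Allow lower acceptance levels if the VIB isn't VMware-certified
# (check your organization's policy before doing this)
esxcli software acceptance set --level=CommunitySupported

# Install the downloaded driver VIB (example path)
esxcli software vib install -v /vmfs/volumes/datastore1/drivers/driver.vib

# Reboot so the new driver module is loaded
reboot
```

Alternatively, the VIB can be slipstreamed into the installation image before upgrading, which is what the linked procedure describes.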
Thanks Alessandro, I had not heard about this driver injection method before.
So are you saying I could download a driver that was originally intended for 6.7, and use it in 7.0?
I believe this is the page for my device:
I can download a vib file from the top link (scsi-mpt2sas-20.00.01.00-1OEM.5126.96.36.1991820.x86_64.vib), but they don't list ESXi 7.0 on this page. Are they just indicating which ESXi versions already include this driver?
Thanks for your help!
That page shows a driver type of vmklinux, and there's no support for such drivers in ESXi 7.
Surely that message can be ignored for devices you're going to pass through. The fact that ESXi doesn't have drivers for the card isn't relevant.
Thanks Scott, you're right, I missed that. I was a little confused here, because I ran the command "esxcli system module list | grep vmklinux" on my 6.7 install and it came up with nothing. Apparently the module should show up there if you're using any vmklinux drivers. But the reply from vmwph has made me think this is because a passed-through device may not actually need a driver/vib under ESXi. The closest info I could find on this related to sharing GPUs between VMs versus passing them through to a single VM - apparently you only need a vib if you're sharing them. I'm guessing the same likely applies to my SAS card.
I do see some LSI/SAS output using the CLI, but I'm not sure that's actually for 'drivers' as such.
[root@ESXi:~] esxcli system module list | grep vmklinux
[root@ESXi:~]
[root@ESXi:~] esxcli software vib list | grep lsi
lsi-mr3                        7.708.07.00-3vmw.6188.8.131.5220388     VMW      VMwareCertified   2019-10-23
lsi-msgpt2                     20.00.06.00-2vmw.6184.108.40.20620388   VMW      VMwareCertified   2019-10-23
lsi-msgpt35                    09.00.00.00-5vmw.6220.127.116.1120388   VMW      VMwareCertified   2019-10-23
lsi-msgpt3                     17.00.02.00-1vmw.618.104.22.16820388    VMW      VMwareCertified   2019-10-23
lsu-lsi-drivers-plugin         1.0.0-1vmw.622.214.171.12406603         VMware   VMwareCertified   2019-10-23
lsu-lsi-lsi-mr3-plugin         1.0.0-13vmw.6126.96.36.19902608         VMware   VMwareCertified   2019-10-23
lsu-lsi-lsi-msgpt3-plugin      1.0.0-9vmw.6188.8.131.5206603           VMware   VMwareCertified   2019-10-23
lsu-lsi-megaraid-sas-plugin    1.0.0-9vmw.6184.108.40.20669922         VMware   VMwareCertified   2019-10-23
lsu-lsi-mpt2sas-plugin         2.0.0-7vmw.6220.127.116.1169922         VMware   VMwareCertified   2019-10-23
[root@ESXi:~] esxcli software vib list | grep sas
scsi-megaraid-sas              6.603.55.00-2vmw.618.104.22.16869922    VMW      VMwareCertified   2019-10-23
scsi-mpt2sas                   19.00.00.00-2vmw.622.214.171.12469922   VMW      VMwareCertified   2019-10-23
scsi-mptsas                    4.23.01.00-10vmw.6126.96.36.19969922    VMW      VMwareCertified   2019-10-23
lsu-lsi-megaraid-sas-plugin    1.0.0-9vmw.6188.8.131.5269922           VMware   VMwareCertified   2019-10-23
lsu-lsi-mpt2sas-plugin         2.0.0-7vmw.6184.108.40.20669922         VMware   VMwareCertified   2019-10-23
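Incidentally, if you want to see which driver module an adapter is actually bound to (rather than just which driver VIBs happen to be installed), a couple of standard esxcli queries give that directly; a minimal check, assuming a normal shell session on the host:

```shell
# List storage adapters; the Driver column shows the module each HBA is using
esxcli storage core adapter list

# Show details for a specific driver VIB, e.g. the mpt2sas one from the list above
esxcli software vib get -n scsi-mpt2sas
```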
It looks like ESXi has a built-in recovery mode that lets you roll back to your previous install. Given that, I think I'll just give it a go and cross my fingers...
Thanks for your help!
For anyone else going down this road, here's where I ended up....
I ran the ESXi 7.0 upgrade via iso, ignored the 'unsupported hardware' warning, and the install completed successfully.
After logging into ESXi, all my VMs had started successfully... except one. My FreeNAS VM had not started, and had a vague error about a PCI device. Clearly the passthrough HBA was not working.
I went to Host/Manage/Hardware and the HBA was showing up with the correct name. However, its passthrough setting had been switched to inactive. I toggled it to active, but got an error in addition to the usual "Reboot required" message. I figured it hadn't worked, but I tried rebooting anyway. After the reboot, the device showed as passthrough active. Promising!
I then tried to start the FreeNAS VM, but no luck - another error. I found a support article for this one, which said to remove and re-add the device when this happens. I tried that, and the VM booted and worked no problem!
So, a bit of a bumpy road, but success: ESXi 7.0 is now running great!
Thanks for your help
Basically, once I've configured my AMD GPU properly for passthrough, the settings don't persist through host reboots, and I have to use the ESXi Host Client (NOT the vSphere Client) to toggle them off then on again; tada, no second reboot needed, and the GPU-mapped VM can now boot up without issue.
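For what it's worth, the same off/on toggle can also be sketched from an ESXi shell, assuming your 7.0 build exposes the `esxcli hardware pci pcipassthru` namespace (the device address below is just an example; get yours from the list command):

```shell
# List passthrough-capable PCI devices and their current enabled state
esxcli hardware pci pcipassthru list

# Toggle passthrough off and back on for the GPU (example device address)
esxcli hardware pci pcipassthru set -d 0000:03:00.0 -e false
esxcli hardware pci pcipassthru set -d 0000:03:00.0 -e true
```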
Came from here (I'm the OP on that thread), saw the link to the TinkerTry blog, and followed the link to this thread :-)
Just in case, I created a support ticket within the Pacific Beta for this strange GPU passthrough behavior (ticket number: 20105602002) and provided logs, although there wasn't anything unusual for support to see. This was in between the beta testing and the GA version, and support told me to wait for the GA version (which still has this bug in it).
The problem for me was that after every reboot, the GPU device is lost from the passthrough entry (as well as the entry in the ah-trees.conf). I have tried copying a backup of the ah-trees.conf file, with all devices passed through, back to its original destination, but it seems my script within rc.local doesn't run.
The HDMI device always stays passed through. And while ESXi is booted, I can just disable and re-enable passthrough for the whole GPU (RX 5700 XT, RX 480, and RX 580; although the RX 5700 XT still has the GPU bug, but that's for AMD to fix in their CPU microcode).
Then there is the "bug" where VMs with passed-through USB PCIe cards won't boot if any cables or devices are attached to that same USB card (the VM won't even show the VMware BIOS boot logo).
Workaround: detach everything from the USB PCIe card before you even think about starting the VM, then power up your VM, and then connect your USB cables.
For now, I have downgraded one host back to 6.7 U3, where neither problem exists (one VM on this host is used for VR stuff).
My other host with this passthrough configuration is still on 7.0 GA, simply because I have better access to the USB PCIe card there and the GPU passthrough isn't buggy (different mainboard). Maybe a new ESXi update will fix this.
Gladly, I stumbled upon a workaround that sticks and doesn't have to be redone with every ESXi host reboot. I've added the details and a warning here:
Unfortunately, over 1,600 folks have apparently viewed my previously suggested (non-sticky) toggle workaround video, which would seem to indicate that a lot of people have been affected by this issue; I can't think of any reason somebody would watch that video unless they were hitting the same problem. While I'm glad to have helped, there are clearly still more passthrough issues that people in this thread have experienced. In these early days since the 7.0 release, I really appreciate it when folks on a system currently supported for ESXi 7.0 (a situation I'm unfortunately not in) take the time to open a ticket. If somebody does manage to get these sorts of issues reported to VMware, please let us know here. Thank you!