1. Does this warning mean ESXi would not be able to pass this device through to a VM like I've been doing under 6.7?
The solution is to download the .vib driver for the unrecognized device and inject it into the ESXi 7.0 installation.
You can do it through a procedure similar to this (but adapted for ESXi 7):
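As a rough sketch of what that injection looks like on a running host (the acceptance level must allow non-VMware VIBs, and the datastore path and .vib filename below are placeholders, not the actual driver for this card):

```shell
# Allow installation of community/OEM-signed VIBs (sketch; adjust to your policy):
esxcli software acceptance set --level=CommunitySupported

# Copy the .vib to a datastore on the host first, then install it
# (path and filename are examples only):
esxcli software vib install -v /vmfs/volumes/datastore1/driver-example.vib

# Reboot so the new driver module is loaded:
reboot
```

Note that `esxcli software vib install` will refuse the package outright if it targets an incompatible ESXi release, which is the crux of the vmklinux problem discussed below.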
2. Is there any hope of ESXi supporting this in the future, or does it generally mean they've dropped support for this device now?
Keep in mind that if your device's drivers were never released as a .vib, there is nothing you can do: you need to upgrade your hardware or stay on ESXi 6.7.
Thanks Alessandro, I had not heard about this driver injection method before.
So are you saying I could download a driver that was originally intended for 6.7, and use it in 7.0?
I believe this is the page for my device:
I can download a vib file from the top link (scsi-mpt2sas-20.00.01.00-1OEM.5188.8.131.521820.x86_64.vib), but they don't list ESXi 7.0 on this page. Are they just indicating which ESXi versions already include this driver?
Thanks for your help!
That page shows a driver type of vmklinux; there's no support for such drivers in ESXi 7.
Surely that message can be ignored for devices you're going to pass through. The fact that ESXi doesn't have a driver for the card isn't relevant.
Thanks Scott, you're right, I missed that. I was a little confused here, because I ran the command "esxcli system module list | grep vmklinux" on my 6.7 install, and it came up with nothing. Apparently the module should show up there if you're using any vmklinux drivers.

But the reply from vmwph has made me think this is because a passed-through device may not actually need a driver/vib under ESXi. The closest info I could find on this was related to sharing GPUs between VMs versus passing them through to a single VM - apparently you only need a vib if you're sharing them. I'm guessing the same likely applies to my SAS card.
I do see some LSI/SAS output using the CLI, but I'm not sure that's actually for 'drivers' as such.
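One way to check which driver is actually bound to the HBA, rather than which VIBs merely happen to be installed, is the storage adapter listing (a sketch; the adapter names shown on your host will differ):

```shell
# Show each storage adapter and the driver module currently servicing it:
esxcli storage core adapter list

# The "Driver" column distinguishes a legacy vmklinux module such as
# scsi-mpt2sas from a native driver such as lsi_msgpt2. If the HBA is
# configured for passthrough, it may not appear here at all, since the
# hypervisor no longer claims it.
```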
[root@ESXi:~] esxcli system module list | grep vmklinux
[root@ESXi:~]
[root@ESXi:~] esxcli software vib list | grep lsi
lsi-mr3                       7.708.07.00-3vmw.6184.108.40.20620388    VMW     VMwareCertified   2019-10-23
lsi-msgpt2                    20.00.06.00-2vmw.6220.127.116.1120388    VMW     VMwareCertified   2019-10-23
lsi-msgpt35                   09.00.00.00-5vmw.618.104.22.16820388     VMW     VMwareCertified   2019-10-23
lsi-msgpt3                    17.00.02.00-1vmw.622.214.171.12420388    VMW     VMwareCertified   2019-10-23
lsu-lsi-drivers-plugin        1.0.0-1vmw.6126.96.36.19906603           VMware  VMwareCertified   2019-10-23
lsu-lsi-lsi-mr3-plugin        1.0.0-13vmw.6188.8.131.5202608           VMware  VMwareCertified   2019-10-23
lsu-lsi-lsi-msgpt3-plugin     1.0.0-9vmw.6184.108.40.20606603          VMware  VMwareCertified   2019-10-23
lsu-lsi-megaraid-sas-plugin   1.0.0-9vmw.6220.127.116.1169922          VMware  VMwareCertified   2019-10-23
lsu-lsi-mpt2sas-plugin        2.0.0-7vmw.618.104.22.16869922           VMware  VMwareCertified   2019-10-23
[root@ESXi:~] esxcli software vib list | grep sas
scsi-megaraid-sas             6.603.55.00-2vmw.622.214.171.12469922    VMW     VMwareCertified   2019-10-23
scsi-mpt2sas                  19.00.00.00-2vmw.6126.96.36.19969922     VMW     VMwareCertified   2019-10-23
scsi-mptsas                   4.23.01.00-10vmw.6188.8.131.5269922      VMW     VMwareCertified   2019-10-23
lsu-lsi-megaraid-sas-plugin   1.0.0-9vmw.6184.108.40.20669922          VMware  VMwareCertified   2019-10-23
lsu-lsi-mpt2sas-plugin        2.0.0-7vmw.6220.127.116.1169922          VMware  VMwareCertified   2019-10-23
It looks like ESXi has a built-in recovery mode that lets you roll back to your previous install. Given that, I think I'll just give it a go and cross my fingers...
Thanks for your help!
For anyone else going down this road, here's where I ended up....
I ran the ESXi 7.0 upgrade via iso, ignored the 'unsupported hardware' warning, and the install completed successfully.
After logging into ESXi, all my VMs had started successfully... except one. My FreeNAS VM had not started, and had a vague error about a PCI device. Clearly the passthrough HBA was not working.
I went to Host/Manage/Hardware and the HBA was showing up with the correct name. However, its passthrough setting had been switched to inactive. I toggled it to active, but there was an error in addition to the usual "Reboot required" message. I figured it didn't work, but I tried rebooting anyway. After reboot, the device now showed as passthrough active. Promising!
I then tried to start the FreeNAS VM, but no luck - another error. I found a support article for this one, and it said when this happens to remove and re-add the device. I tried that, and the VM booted and worked no problem!
So, a bit of a bumpy road, but a success: ESXi 7.0 is now running great!
Thanks for your help
Basically, once I've configured my AMD GPU properly for passthrough, the settings don't persist across host reboots. I have to use the ESXi Host Client (NOT the vSphere Client) to toggle them off and then on again, and, tada, no second reboot needed: the GPU-mapped VM can now boot up without issue.
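If your build exposes the pcipassthru namespace in esxcli, that same off/on toggle can be done from the host shell instead of the Host Client (a sketch; the device address 0000:03:00.0 is an example, substitute your GPU's address from the list output):

```shell
# Show PCI devices and whether passthrough is currently enabled for each:
esxcli hardware pci pcipassthru list

# Toggle passthrough off and back on for one device
# (the address below is an example, not a real mapping):
esxcli hardware pci pcipassthru set -d 0000:03:00.0 -e false
esxcli hardware pci pcipassthru set -d 0000:03:00.0 -e true
```

If the toggle reports that a reboot is required, the change applies only after the host restarts, which matches the behavior described in the posts above.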