zwbee
Contributor

Upgrade from 6.7 to 7.0 and unsupported hardware

Hi,

I'm relatively new to ESXi. I've been running it as a home server for around 5 months, and so far it's been fantastic. I built my server on new hardware, including the following HBA, which I pass through to a FreeNAS VM:

SAS 9210-8i Host Bus Adapter

Today I thought I'd try the upgrade to ESXi 7.0, but the installer brings up a warning saying my host has unsupported hardware and logs the following:

[pciinfo 1000:0072 1000:3040]

By the looks of it, this represents my HBA card: https://pci-ids.ucw.cz/read/PC/1000/0072/10003040

I've cancelled the upgrade for now. I'm hoping someone can help me with a couple of questions:

1. Does this warning mean ESXi would not be able to pass this device through to a VM like I've been doing under 6.7?

2. Is there any hope of ESXi supporting this in the future, or does it generally mean they've dropped support for this device now?

Any advice is much appreciated!


9 Replies
Alex_Romeo
Leadership

Hi,

VMware Compatibility Guide - System Search

1. Does this warning mean ESXi would not be able to pass this device through to a VM like I've been doing under 6.7?

Answer:

The solution is to download the .vib driver for the unrecognized device and inject it into the ESXi 7.0 installation image.

You can do it with a procedure similar to this one (adapted for ESXi 7):

http://woshub.com/add-drivers-vmware-esxi-iso-image/
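As a rough sketch of the live-host variant of that idea (the linked article covers slipstreaming the driver into the install ISO instead): copy the .vib to a datastore and install it with esxcli. The datastore path below is a placeholder, and the filename is only an example; none of this is verified to work on 7.0.

```shell
# Sketch only - run on the ESXi host, with the .vib already copied to a
# datastore. Path and filename below are examples, not verified for 7.0.
# OEM/partner vibs usually need a lower acceptance level and --no-sig-check.
esxcli software acceptance set --level=PartnerSupported
esxcli software vib install \
  -v /vmfs/volumes/datastore1/scsi-mpt2sas-20.00.01.00-1OEM.550.0.0.1331820.x86_64.vib \
  --no-sig-check
reboot
```

Even if the install itself succeeds, whether the module actually loads still depends on ESXi 7.0 supporting that driver model.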

2. Is there any hope of ESXi supporting this in the future, or does it generally mean they've dropped support for this device now?

Answer:

No.

Keep in mind that if your device's driver isn't released as a .vib, there's nothing you can do: you need to upgrade your hardware or stay on ESXi 6.7.

ARomeo

Blog: https://www.aleadmin.it/
zwbee
Contributor

Thanks Alessandro, I had not heard about this driver injection method before.

So are you saying I could download a driver that was originally intended for 6.7, and use it in 7.0?

I believe this is the page for my device:

VMware Compatibility Guide - I/O Device Search

I can download a vib file from the top link (scsi-mpt2sas-20.00.01.00-1OEM.550.0.0.1331820.x86_64.vib), but they don't list ESXi 7.0 on this page. Are they just indicating which ESXi versions already include this driver?

Thanks for your help!

scott28tt
VMware Employee

That page shows a driver type of vmklinux; there's no support for such drivers in ESXi 7.

vmwph
VMware Employee

That message can safely be ignored for devices you're going to pass through. The fact that ESXi doesn't have a driver for the card isn't relevant, since the guest OS drives the device directly.

zwbee
Contributor

Thanks Scott, you're right, I missed that. I was a little confused here, because I ran "esxcli system module list | grep vmklinux" on my 6.7 install and it came up with nothing; apparently the module should show up there if you're using any vmklinux drivers.

The reply from vmwph has made me think this is because a passed-through device may not actually need a driver/vib on the ESXi side. The closest info I could find was about sharing GPUs between VMs versus passing them through to a single VM - apparently you only need a vib if you're sharing them. I'm guessing the same applies to my SAS card.

I do see some LSI/SAS output using the CLI, but I'm not sure that's actually for 'drivers' as such.

[root@ESXi:~] esxcli system module list | grep vmklinux
[root@ESXi:~]
[root@ESXi:~] esxcli software vib list | grep lsi
lsi-mr3                        7.708.07.00-3vmw.670.3.73.14320388    VMW     VMwareCertified   2019-10-23
lsi-msgpt2                     20.00.06.00-2vmw.670.3.73.14320388    VMW     VMwareCertified   2019-10-23
lsi-msgpt35                    09.00.00.00-5vmw.670.3.73.14320388    VMW     VMwareCertified   2019-10-23
lsi-msgpt3                     17.00.02.00-1vmw.670.3.73.14320388    VMW     VMwareCertified   2019-10-23
lsu-lsi-drivers-plugin         1.0.0-1vmw.670.2.48.13006603          VMware  VMwareCertified   2019-10-23
lsu-lsi-lsi-mr3-plugin         1.0.0-13vmw.670.1.28.10302608         VMware  VMwareCertified   2019-10-23
lsu-lsi-lsi-msgpt3-plugin      1.0.0-9vmw.670.2.48.13006603          VMware  VMwareCertified   2019-10-23
lsu-lsi-megaraid-sas-plugin    1.0.0-9vmw.670.0.0.8169922            VMware  VMwareCertified   2019-10-23
lsu-lsi-mpt2sas-plugin         2.0.0-7vmw.670.0.0.8169922            VMware  VMwareCertified   2019-10-23
[root@ESXi:~] esxcli software vib list | grep sas
scsi-megaraid-sas              6.603.55.00-2vmw.670.0.0.8169922      VMW     VMwareCertified   2019-10-23
scsi-mpt2sas                   19.00.00.00-2vmw.670.0.0.8169922      VMW     VMwareCertified   2019-10-23
scsi-mptsas                    4.23.01.00-10vmw.670.0.0.8169922      VMW     VMwareCertified   2019-10-23
lsu-lsi-megaraid-sas-plugin    1.0.0-9vmw.670.0.0.8169922            VMware  VMwareCertified   2019-10-23
lsu-lsi-mpt2sas-plugin         2.0.0-7vmw.670.0.0.8169922            VMware  VMwareCertified   2019-10-23
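One way to read that listing (an offline sketch using names copied from the output above): the scsi-* packages are legacy vmklinux drivers, which ESXi 7.0 drops, while the lsi-*/lsu-* packages are native drivers and management plugins. The name-prefix check below is just a heuristic for this particular listing, not an official classification.

```shell
#!/bin/sh
# Classify the vib names copied from the 6.7 listing above.
# scsi-* = legacy vmklinux drivers (removed in ESXi 7.0);
# everything else here is a native driver or management plugin.
cat > /tmp/vibs.txt <<'EOF'
scsi-mpt2sas
scsi-mptsas
scsi-megaraid-sas
lsi-mr3
lsi-msgpt2
lsi-msgpt3
EOF
while read -r vib; do
  case "$vib" in
    scsi-*) echo "$vib: legacy vmklinux driver" ;;
    *)      echo "$vib: native driver" ;;
  esac
done < /tmp/vibs.txt | tee /tmp/vib_classes.txt
```

The 9210-8i was claimed by scsi-mpt2sas here, which lines up with scott28tt's point that the compatibility guide lists a vmklinux driver type for this card.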

It looks like ESXi has a built-in recovery mode that lets you roll back to your previous install. Given that, I think I'll just give it a go and cross my fingers...

Thanks for your help!

zwbee
Contributor

For anyone else going down this road, here's where I ended up....

I ran the ESXi 7.0 upgrade via ISO, ignored the 'unsupported hardware' warning, and the install completed successfully.

After logging into ESXi, all my VMs had started successfully... except one. My FreeNAS VM had not started and showed a vague error about a PCI device. Clearly the passthrough HBA was not working.

I went to Host > Manage > Hardware and the HBA was showing up with the correct name. However, its passthrough setting had been switched to inactive. I toggled it to active, but an error appeared in addition to the usual "Reboot required" message. I figured it hadn't worked, but I tried rebooting anyway. After the reboot, the device showed as passthrough active. Promising!

I then tried to start the FreeNAS VM, but no luck - another error. I found a support article for that one, which said that when this happens you should remove and re-add the device. I tried that, and the VM booted and worked with no problem!

So, a bit of a bumpy road, but success: ESXi 7.0 is now running great!

Thanks for your help! 🙂


pbraren
Hot Shot

Interesting! This sounds a bit similar to what I found happening to my GPU after upgrading to 7.0, explained in my article and demonstrated in my video.

Basically, once I've configured my AMD GPU properly for passthrough, the settings don't persist through host reboots, and I have to use the ESXi Host Client (NOT the vSphere Client) to toggle them off then on again - and tada, no second reboot needed: the GPU-mapped VM can then boot without issue.
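For anyone who'd rather script that off/on toggle than click through the Host Client, something like the following may work from the ESXi shell. This is a sketch only: verify that your 7.0 build actually has the `esxcli hardware pci pcipassthru` namespace before relying on it, and the device address is a placeholder for your GPU's PCI address.

```shell
# Sketch - verify the namespace exists first with:
#   esxcli hardware pci pcipassthru list
# 0000:02:00.0 is a placeholder; substitute your device's PCI address.
esxcli hardware pci pcipassthru set -d 0000:02:00.0 -e false
esxcli hardware pci pcipassthru set -d 0000:02:00.0 -e true
```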

TinkerTry.com
meoli
Enthusiast

I came from here (I'm the OP of this thread):

https://www.reddit.com/r/vmware/comments/gyc4kp/esxi_70_where_is_the_passthru_information_stored/fta...

I saw the link to the TinkerTry blog there and followed it to this thread 🙂

Just in case, I created a support ticket within the Pacific beta for this strange GPU passthrough behavior (ticket number: 20105602002) and provided logs, although there wasn't anything unusual for support to see. This was between the beta testing and the GA version, and support told me to wait for the GA version (which still has this bug).

The problem for me was that after every reboot, the GPU device disappears from the passthrough entry (as well as from the entry in ah-trees.conf). I tried restoring a backup of the ah-trees.conf file, with all devices passed through, to its original location, but it seems my script in rc.local doesn't run.

The HDMI device always stays passed through, and while ESXi is booted I can simply disable and re-enable passthrough for the whole GPU (RX 5700 XT, RX 480, and RX 580 - although the RX 5700 XT still has the GPU reset bug, which is on AMD to fix in their microcode).

Then there is the "bug" where VMs with passed-through USB PCIe cards won't boot if any cables or devices are attached to that USB card (the VM won't even show the VMware BIOS boot logo).

Workaround: detach everything from the USB PCIe card before you even think about starting the VM, then power up the VM, and only then connect your USB cables.

For now I have downgraded one host back to 6.7 U3, where neither problem exists (one VM on that host is used for VR).

My other host with this passthrough configuration is still on 7.0 GA, simply because I have better access to the USB PCIe card there and the GPU passthrough isn't buggy (different mainboard). Maybe a new ESXi update will fix this.

pbraren
Hot Shot

Gladly, I stumbled upon a workaround that sticks and doesn't have to be redone with every ESXi host reboot. I've added the details and a warning here:

https://TinkerTry.com/vmware-vsphere-esxi-7-gpu-passthrough-ui-bug-workaround#workarounds

Unfortunately, over 1,600 folks have apparently viewed my previously suggested (non-sticky) toggle workaround video, which suggests a lot of people have been affected by this issue; I can't think of any reason somebody would watch that video unless they were hitting the same problem. While I'm glad to have helped, there are clearly more passthrough issues that people in this thread have experienced. I really appreciate it when folks with a currently supported ESXi 7.0 system take the time to open a ticket - a situation I'm unfortunately not in myself in these early days since the 7.0 release. If somebody does manage to get these sorts of issues reported to VMware, please let us know here. Thank you!

TinkerTry.com