Just upgraded from ESXi 7.0 U3d (build 19482537) to U3f (build 20036589), but now it can't see my fibre channel storage anymore.
This is on a Dell R640 with an Emulex LPe12000-S 8Gb Fibre Channel Adapter (lpfc) installed. It was originally a Sun card (SG-XPCIE2FC-EM8-Z) and had been working fine until the U3f upgrade.
One system I still have running U3d, and it sees this:
[root@bondi:~] esxcfg-scsidevs -a
vmhba0 lsi_mr3 link-n/a sas.5d0946600d28f000 (0000:19:00.0) Broadcom PERC H740P Mini
vmhba1 vmw_ahci link-n/a sata.vmhba1 (0000:00:11.5) Intel Corporation Lewisburg SATA AHCI Controller
vmhba2 vmw_ahci link-n/a sata.vmhba2 (0000:00:17.0) Intel Corporation Lewisburg SATA AHCI Controller
vmhba3 lpfc link-up fc.20000090fa0cfd92:10000090fa0cfd92 (0000:65:00.0) Emulex Corporation Emulex LPe12000-S 8Gb Fibre Channel Adapter
vmhba4 lpfc link-up fc.20000090fa0cfd93:10000090fa0cfd93 (0000:65:00.1) Emulex Corporation Emulex LPe12000-S 8Gb Fibre Channel Adapter
But the U3f system doesn't:
[root@indigo:~] esxcfg-scsidevs -a
vmhba0 lsi_mr3 link-n/a sas.5d09466003def900 (0000:19:00.0) Broadcom PERC H740P Mini
vmhba1 vmw_ahci link-n/a sata.vmhba1 (0000:00:11.5) Intel Corporation Lewisburg SATA AHCI Controller
vmhba2 vmw_ahci link-n/a sata.vmhba2 (0000:00:17.0) Intel Corporation Lewisburg SATA AHCI Controller
The cards are still visible on the PCI bus on both hosts, though:
[root@bondi:~] vmkchdev -l |grep 10df
0000:65:00.0 10df:fc40 10df:fc42 vmkernel vmhba3
0000:65:00.1 10df:fc40 10df:fc42 vmkernel vmhba4
[root@indigo:~] vmkchdev -l |grep 10df
0000:65:00.0 10df:fc40 10df:fc42 vmkernel
0000:65:00.1 10df:fc40 10df:fc42 vmkernel
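On indigo the vmkchdev lines end without a vmhba alias, i.e. vmkernel sees the PCI functions but no driver has claimed them. One quick way to flag unclaimed functions is to look for lines with only four fields. A minimal sketch, run on a workstation shell against the sample output above rather than on the host itself:

```shell
# Sample "vmkchdev -l | grep 10df" output pasted to a file: the first
# function has no trailing vmhba alias (unclaimed), the second does.
cat <<'EOF' > /tmp/vmkchdev.out
0000:65:00.0 10df:fc40 10df:fc42 vmkernel
0000:65:00.1 10df:fc40 10df:fc42 vmkernel vmhba4
EOF

# A claimed device has 5 fields (the vmhba alias); 4 fields means no
# driver bound to it.
awk 'NF == 4 { print $1 " unclaimed" }' /tmp/vmkchdev.out
# → 0000:65:00.0 unclaimed
```

The file name and the "unclaimed" wording are illustrative; the field-count trick is just a convenience for eyeballing larger hosts.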
I don't see this card explicitly mentioned in the HCL though it worked fine until today.
The lpfc driver was updated in U3f...
old: 14.0.169.25-5vmw.703.0.35.19482537
new: 14.0.543.0-1OEM.700.1.0.15843807
Is it possible to reinstall the old driver from U3d, or is there something I can do to make it work under U3f?
If not, I may have to roll back to U3d... which would be a shame, as I use this Essentials-licensed platform as a pre-production testing system for our Standard-licensed setup. I don't really want to have to buy replacement hardware...
Ian D
That looks like a problem we had with 6.7 and the HPE ISO, but that was with an Emulex 12.x lpfc version. That version was broken and we did not see any LUNs on our storage. It worked with older and newer versions, just not with certain ones. VMware's inbox driver always worked (but was old). You are on version 14.x, though. Maybe you have to update the FC adapter firmware for the most recent driver?
Emulex Drivers for VMware ESXi Release Notes (broadcom.com)
From the two versions you posted, it seems the first is the native VMware inbox driver and the new one is the latest from Emulex. The Emulex drivers just suck; they are broken more or less constantly.
There are other Emulex lpfc driver versions available at Download VMware vSphere - VMware Customer Connect.
You can try one of them or download the one you had before from here: https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/lpfc/VMW_bootbank_lpfc_14.0...
Looking at the driver list, Release_Notes_lpfc-14.0.326.12 (for a slightly older lpfc driver than U3f provides) says that support for the LPe12000 series HBAs was discontinued in that driver.
The U3f lpfc 14.0.543 driver release notes don't mention this at all.
The release notes for U3f as a whole don't say anything about it either!
Anyway, following your advice (many thanks) I reinstalled the U3d lpfc driver, and it seems to work again, though I'm not sure whether there will be ongoing issues or whether that driver will work in future ESXi releases.
[root@indigo:~] esxcli software vib list |grep lpfc
lpfc 14.0.543.0-1OEM.700.1.0.15843807 EMU VMwareCertified 2022-07-20
[root@indigo:~] esxcli software vib remove -n lpfc
Removal Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed:
VIBs Removed: EMU_bootbank_lpfc_14.0.543.0-1OEM.700.1.0.15843807
VIBs Skipped:
[root@indigo:/tmp] esxcli software vib install -v /tmp/VMW_bootbank_lpfc_14.0.169.25-5vmw.703.0.35.19482537.vib
Installation Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMW_bootbank_lpfc_14.0.169.25-5vmw.703.0.35.19482537
VIBs Removed:
VIBs Skipped:
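For future patch cycles it may help to script a check of which lpfc version is active, warning on the versions this thread found to drop LPe12000 support (14.0.326.x per its release notes, and the 14.0.543.x driver from U3f). A sketch, parsing a pasted copy of the `esxcli software vib list | grep lpfc` output rather than running on the host; the file name and version list are assumptions:

```shell
# Sample "esxcli software vib list | grep lpfc" output pasted to a file.
cat <<'EOF' > /tmp/lpfc-vib.out
lpfc  14.0.169.25-5vmw.703.0.35.19482537  VMW  VMwareCertified  2022-07-21
EOF

# Extract the version column and compare against the known-bad branches.
ver="$(awk '$1 == "lpfc" { print $2 }' /tmp/lpfc-vib.out)"
case "$ver" in
  14.0.326.*|14.0.543.*)
    echo "WARNING: lpfc $ver drops LPe12000 support" ;;
  *)
    echo "OK: lpfc $ver" ;;
esac
# → OK: lpfc 14.0.169.25-5vmw.703.0.35.19482537
```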
Then rebooted. Let's see how that goes...
According to the VMware Compatibility Guide - I/O Device Search, the supported versions for SG-XPCIE1FC-EM8-Z are either the native driver 14.0.169.25-5vmw or Emulex 12.8.614.16.
But this is always a mess; usually the storage vendor says which driver/firmware on the server side is supported - at least that's what HPE keeps telling us. So in the end it's trial and error. Using the native VMware drivers is probably a good idea, as VMware then also takes care of issues - which is a problem if you use the async vendor drivers.
FYI, I updated another U3d ESXi server to U3f and found the lpfc driver wasn't updated until I applied the "Non-Critical Host Patches", at which point I _then_ lost access to the SAN until I manually rolled the lpfc driver back.
Thanks for sharing this issue. I can confirm the same issue applies to LPe16000 HBAs as well. I excluded all OEM variants of the lpfc driver in my vLCM baseline as a result. The downside of this approach is that you can easily create a patch baseline with dynamic criteria, but exclusions are static. So this means we need to keep updating the baseline with new exclusions. How did you work around this problem?
That's still my problem too. I created a custom extension baseline with the driver version, but then I also need to create a custom non-critical baseline without the newer driver.
And for vSAN I have a similar problem: the async driver for the HPE E208i-a controller in the latest non-critical baseline is not on the HCL for vSAN, so I get a warning. If I install the old one and create a custom extension baseline, I again have to clone the predefined non-critical baseline and blacklist the current driver there.
And each cluster has its own issues and custom baselines. What a mess.
Also keep in mind that sometimes the vendor updates the drivers in their custom ESXi images. For example, we noticed that when upgrading from 6.7 to 7 we shouldn't use the latest version of their ESXi image (A06) because it contains a version of the lpfc driver that doesn't work. The A04 version works just fine.
Hi all,
Had the exact same problem after updating to U3g this morning - such a mess!
Luckily my cluster was strong enough to hold the load.
Thanks for all the commands, this helped too. 😉
VMware support pointed me to this - Broadcom Emulex End of Life HBA models not detected with 14.0.x.x drivers
Which is confusing, because the document says "end of life", yet I see a big Y next to supported for my HBA on 7.0 U3.
Can anyone guide me on how best to remove the newer driver from the non-critical baseline so I don't have to reinstall the driver each time I patch?
If the drivers are not in the update then you may have to download them and manually update the HBA driver. First check the compatibility guide to see whether it is compatible. Please see the KBs below.
VMware Compatibility Guide - System Search
Determining Network/Storage firmware and driver version in ESXi (1027206) (vmware.com)
How to download and install async drivers in VMware ESXi (2005205)
In our case, we had to upgrade the HBA driver to VMware ESXi 7.0 lpfc 12.8.614.16 FC (Download VMware vSphere - VMware Customer Connect) prior to patching to the latest ESXi version.
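One way to make a regression like this obvious after patching is to snapshot the FC adapter list (e.g. from `esxcfg-scsidevs -a`) before and after, and diff the vmhba names. A sketch run on a workstation shell, with the snapshots saved to files; the sample data mirrors the outputs earlier in the thread, and the file names are assumptions:

```shell
# Before: two lpfc HBAs claimed; after: none (the U3f failure mode).
cat <<'EOF' > /tmp/adapters.before
vmhba3 lpfc link-up fc.20000090fa0cfd92:10000090fa0cfd92
vmhba4 lpfc link-up fc.20000090fa0cfd93:10000090fa0cfd93
EOF
: > /tmp/adapters.after

# Any vmhba present before but missing after the patch is a regression.
cut -d' ' -f1 /tmp/adapters.before | sort > /tmp/hbas.before
cut -d' ' -f1 /tmp/adapters.after  | sort > /tmp/hbas.after
comm -23 /tmp/hbas.before /tmp/hbas.after
# → vmhba3
# → vmhba4
```

Here `comm -23` prints only lines unique to the "before" file, i.e. the adapters that disappeared.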
Very grateful for your help - you saved my life with the download link and how-to instructions. I had updated a company server and lost communication with the storage unit.