This is the same issue discussed in the KB below, but removing the VIB should not harm anything. I am unsure why it shows as non-compliant; possibly you can remove the one that is 1493 KB and install the one that is 1602 KB.
If you remove the existing one from the list, it is the only copy that gets removed, so when the host is scanned against the baseline the VIB is missing, which could lead to the non-compliant status.
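For anyone trying the remove-and-reinstall route from the CLI, here is a rough sketch using esxcli. The VIB file path is hypothetical, and the script defaults to printing the commands instead of running them, so nothing touches the host until you flip DRY_RUN yourself. Always check `esxcli software vib list` first to confirm the exact name on your host.

```shell
#!/bin/sh
# Sketch only: remove the older conflicting VIB, then install the newer one.
# OLD_VIB matches the name reported in this thread; NEW_VIB_PATH is a
# hypothetical local path -- substitute your actual depot/offline bundle.
OLD_VIB="elx-esx-libelxima.so"
NEW_VIB_PATH="/vmfs/volumes/datastore1/elx-esx-libelxima-newer.vib"  # hypothetical

DRY_RUN=1  # set to 0 only when running on a real ESXi host

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

# Remove the older conflicting VIB (a reboot may be required afterwards).
run esxcli software vib remove -n "$OLD_VIB"
# Install the newer VIB from the local file.
run esxcli software vib install -v "$NEW_VIB_PATH"
```

Note that, as discussed above, remediating against the default baseline afterwards can simply reinstall the conflicting VIB, so this only helps in combination with a custom baseline.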
Interesting. I see in the kb: "Workaround: Create a custom baseline without the conflicting vib."
I've done this and it does seem to work. With that VIB removed, the host shows as non-compliant with the default built-in baseline. I'll have to make sure the default baseline remains unattached, since another admin is likely to come along in a few days or weeks and apply it (thus reinstalling the conflicting VIB).
So I think I may have brought this on myself by having the HPE vibsdepot as a patch source in addition to the default VMware repository. I just reset VUM back to its defaults, and that seems to have cleared all of this up.
Just to add to this discussion, we have an identical problem to the one you've reported. In our case, we are adding two new HPE Gen10 Blade Servers to a system with 630FLB and 534M HBAs, Virtual Connect switches, and a 3PAR system, all operating over an iSCSI SAN, so we are definitely using hardware that depends on this VIB and related drivers.
Like you, we started with the HPE 6.7U2 Custom ISO. Based on HPE's Technical White Paper at:
we were aware of the conflicting VIB at the outset of this exercise (see Appendix O in that document). However, as you have noted, deleting this VIB does not resolve the problem, and after many attempts we just keep going around in circles, always coming back to the state where we have duplicate, incompatible "elx-esx-libelxima" VIBs. One VIB shows up as "elx-esx-libelxima.so" while the other usually shows up as "elx-esx-libelxima.so-8169922." The Creation Dates for these two VIB variants vary, depending on the order in which we try to remediate these hosts. I've also tried manually installing the latest version of this VIB after deleting the duplicates, but that actually made the remediation problem worse.
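In case it helps anyone else confirm they're in the same state, the duplicate variants can be spotted by filtering `esxcli software vib list` output. The sample output and version strings below are fabricated purely for illustration; on a real host you would pipe the live command through the same awk filter instead.

```shell
#!/bin/sh
# Flag any VIB entries whose name starts with elx-esx-libelxima.
# The sample listing below is made up; versions and columns are illustrative.
sample_output='Name                          Version          Vendor  Acceptance Level
elx-esx-libelxima.so          11.x.y-example   EMU     VMwareCertified
elx-esx-libelxima.so-8169922  12.x.y-example   EMU     VMwareCertified'

# Skip the header row, keep only the matching names.
dupes=$(printf '%s\n' "$sample_output" | awk 'NR > 1 && $1 ~ /^elx-esx-libelxima/ { print $1 }')
printf '%s\n' "$dupes"
```

Seeing more than one line of output here corresponds to the two conflicting variants described above.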
We also see problems with the "hpe-driver-bundle-670," of which there are five versions: 10.2.0, 10.3.0, 10.3.5, 10.4.0, and 10.4.1. Our ESXi host is actually using the "hpe-driver-bundle-6184.108.40.206," which is the latest, from April 10, 2019. After we remediate one of our new ESXi Gen10 hosts, we see that we are non-compliant with the VMware predefined "Non-Critical Host Patches" baseline. What is interesting is that only two patches are not compliant:
But this makes no sense, given that hpe-driver-bundle-6220.127.116.11 is what we actually have installed.
Honestly, this is a maddening problem. Even following HPE's instructions to delete this "elx-esx-libelxima" VIB results in a situation where remediating restores the two conflicting VIBs, which then prevent further remediation. It isn't clear which vendor is the source of this problem, but it seems absurd to be hitting it no matter what the source is.
We're still rolling with the VUM defaults. We haven't put the HPE vibsdepot URL back in (as much as we'd like to).