PatrickDLong's Posts

While I agree with earlier posters tayfundeger and Tibmeister that the Gen8 servers from HPE are no longer on the ESXi 6.7 HCL and that you should not run that combination in a production environment because it is unsupported, in my opinion JimKnopf99's issue is more likely the result of changes in ESXi 6.7 U3's handling of hardware sensors and the resulting increase in Hardware Sensor Status event logging - his screenshot shows the 0.23.11 device System Chassis 1 UID in "Unbekannt" (unknown) state.  This is NOT due to his Gen8 host no longer being on the HCL - I have the same "unknown" status on the same 0.23.11 System Chassis 1 UID device on ALL my Gen9 and Gen10 hosts (the device ID is slightly different on Gen10s - 0.23.1174) that have been upgraded to 6.7 U3.  The screenshot below is from a Gen9 BL460c running 6.7 U3, and that Gen9 hardware (and obviously Gen10 as well) is absolutely on the HCL for the 6.7 U1-U3 releases.  I have all the latest firmware, iLO, drivers, and HPE management .vibs installed on these hosts.

JimKnopf99 I would try the workaround described in VMware KB 74608 to disable WBEM services on your host, so long as you can live with disabling ALL vSphere hardware alerting for these hosts - a workaround which I choose not to implement.  It's not an ideal workaround IMO; it's akin to your car dealer telling you that the workaround for an inadvertent check engine light is to snip all the wiring going to your dashboard until they come up with a fix for the root cause.  We have been waiting patiently for more than 2 months for the first post-U3 patch to be released, which will hopefully address this hardware sensor issue.  See a longer post with more details here: Too many events "Host hardware sensor state" after ESXi upgrade 6.5 to 6.7U3

Cheers, Patrick
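If you want to see for yourself which sensors a host is reporting as unknown before deciding on the KB 74608 workaround, a quick pyVmomi script will list them. This is just a rough sketch of what I'd run, not anything from the KB - the vCenter name and credentials are placeholders, and it assumes Python with pyVmomi installed:

```python
# Rough sketch, not an official VMware tool: list hardware sensors whose
# health state is "unknown" on every host visible to the connection.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only; skips cert validation
si = SmartConnect(host="vcenter.example.local", # placeholder vCenter/host name
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        health = host.runtime.healthSystemRuntime
        if not health or not health.systemHealthInfo:
            continue
        for sensor in health.systemHealthInfo.numericSensorInfo:
            if sensor.healthState.key == "unknown":
                print(f"{host.name}: {sensor.name} -> {sensor.healthState.label}")
    view.Destroy()
finally:
    Disconnect(si)
```

On my upgraded hosts, that is exactly where the System Chassis UID device shows up with an unknown health state.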
Wanted to note that there is updated content in the VMware Knowledge Base article today that, as near as I can tell, gives you a NEW option to create a rule to ignore hardware sensor events only from specific hardware sensors.  Oh great - instead of telling me to cut the wires to ALL the warning lights on my car's dashboard, the recommendation is now to just snip the wire to the light that corresponds to where the invalid error is originating.  I guess I'm the only one who finds this type of solution totally unacceptable - how about releasing a patch which actually *resolves* the excess hardware sensor alert generation in the first place? Why hasn't even a single post-U3 patch been released yet?  At this point in the patching lifecycle there had already been TWO patches released post-U2, and a third post-U2 patch was only 8 more days away from release.  It's now been *62* days since the release of U3 and crickets from VMware.

I gotta be honest, I'm in a large enterprise infrastructure and manually truncating my SEAT disk db tables every 4-5 days to keep vCenter from becoming inaccessible is not sitting well with me at this point. I was willing to do it as a temporary workaround, but TWO MONTHS.....  And yes, I still want to receive valid hardware alerts from vCenter - it's not my only alerting mechanism, but it provides important redundancy in the event of a hardware failure on a host.

update-from-esxi6.7-6.7update02  -  04/11/2019  -  U2 release day
ESXi670-201904001  -  04/30/2019  -  19 days after U2 release
ESXi670-201905001  -  05/14/2019  -  33 days after U2 release
ESXi670-201906002  -  06/20/2019  -  70 days after U2 release
update-from-esxi6.7-6.7_update03  -  08/20/2019  -  U3 release day

Today, 10/21/2019, is *62* days after the U3 release date and nothing....
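For anyone double-checking my math, the day counts above are easy to recompute. Here is a quick Python snippet using only the dates as listed (nothing pulled from VMware):

```python
# Recompute the patch-cadence gaps from the dates listed above (MM/DD/YYYY).
from datetime import date

u2_ga = date(2019, 4, 11)
u3_ga = date(2019, 8, 20)
today = date(2019, 10, 21)

post_u2_patches = {
    "ESXi670-201904001": date(2019, 4, 30),
    "ESXi670-201905001": date(2019, 5, 14),
    "ESXi670-201906002": date(2019, 6, 20),
}

for name, released in post_u2_patches.items():
    print(f"{name}: {(released - u2_ga).days} days after U2 GA")

print(f"Days since U3 GA with no patch yet: {(today - u3_ga).days}")
```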
Anyone else find this "workaround" to be decidedly sub-optimal? I mean, when the issue is "our QA team didn't catch that log growth on the vCenter SEAT volume is dramatically higher in the new release due to an unmitigated FLOOD of host hardware sensor state messages", the answer can't simply be "well, everyone just turn off WBEM and stop monitoring your host health in vCenter. Problem solved!" or "Just manually truncate tables in the vCenter db - what could possibly go wrong..." That's like saying that if you're experiencing an issue with datastores filling up, you should simply turn off capacity alerting in vCenter.

This sensor-state alerting problem is not vendor-specific, so there is no good reason it should not have been discovered before 6.7 U3 went out the door; at a bare minimum, I hope there is now a new checklist item for VMware QA to compare overall log write rates against a baseline when evaluating new build candidates for GA.  I guess as a short-term measure to keep vCenter up and running I can accept the workaround - I'm truncating the vCenter db tables because I actually WANT to receive hardware health alerting in vCenter - but we're now almost 5 weeks past the GA date of 6.7 U3 and there does not seem to be any sense of urgency about releasing a real resolution to this issue via a host patch.  Am I over-blowing this whole thing?
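The kind of baseline check I'm talking about is not hard to script, either. Here's a rough sketch - the path and the baseline number are just placeholders I picked, not anything VMware publishes - that samples how fast a directory is growing and flags it when it exceeds a baseline rate:

```python
# Minimal sketch of a "growth rate vs. baseline" check: sample how fast a
# directory is growing and flag it if it exceeds an expected baseline.
# SEAT_PATH and BASELINE_MB_PER_HOUR are placeholders, not official values.
import os
import time

def dir_size_bytes(path):
    """Total size of all regular files under path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished between listing and stat
    return total

def growth_rate_mb_per_hour(path, sample_seconds=300):
    """Sample the directory size twice and extrapolate growth to MB/hour."""
    before = dir_size_bytes(path)
    time.sleep(sample_seconds)
    after = dir_size_bytes(path)
    return (after - before) / (1024 * 1024) / (sample_seconds / 3600.0)

if __name__ == "__main__":
    SEAT_PATH = "/storage/seat"        # placeholder for the vCenter SEAT volume
    BASELINE_MB_PER_HOUR = 50.0        # placeholder baseline from a healthy build
    rate = growth_rate_mb_per_hour(SEAT_PATH)
    status = "OVER baseline" if rate > BASELINE_MB_PER_HOUR else "within baseline"
    print(f"{SEAT_PATH}: {rate:.1f} MB/hour ({status})")
```

Run something like that against a known-good build and a candidate build and the event flood in 6.7 U3 would have jumped out immediately.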