I'm new to ESXi and I'm having trouble applying patches. Some background info on the server:
Single ESXi 5.5.0 host on build 2068190
Server is a Lenovo ThinkServer TS140. In the BIOS, SATA configuration is set to AHCI.
I have an LSI RAID card (Model 9341-4i) installed as well. I suspect that this is very relevant to the issue.
When installing ESXi, I created a custom ISO that included the drivers for the storage adapter (I don't recall exactly, but I believe the OS install failed until I did this).
I use the command esxcli software vib install -d /vmfs/volumes/[DATASTORE]/[PATCH_FILE].zip to apply a patch, and the CLI informs me that this is successful. I see the build ID reported as 2143827, which is the next patch, so everything appears successful. However, when I reboot, a message appears saying "the esxi host does not have persistent storage after reboot" and none of my datastores are found. I reboot once more, and the build ID rolls back to 2068190, and my datastore and VMs are all working properly again.
I found similar symptoms reported here: http://www.vm-help.com/esx/esx3i/no_persistent_storage_after_upgrade.php
They suggested enabling "LVM.EnableResignature"
However, I have found that there is no LVM option in vSphere under Advanced Settings, and I can't seem to find LVM anywhere in the UI.
I also tried to re-add the storage as outlined here: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=101138...
However, when I went to "Add Storage" there were no storage devices listed.
Does anyone have an idea as to why the ESXi host won't take patches, and how to fix it? I'm happy to provide more details if anyone needs them.
Thanks for any help.
The issue in this case is likely that the custom megaraid-sas VIB - required for your RAID controller - is replaced by the update:
2015-03-20T11:37:40Z esxupdate: imageprofile: DEBUG: VIB LSI_bootbank_scsi-megaraid-sas_6.606.06.00-1OEM.500.0.0.472560 is being removed from ImageProfile ESXi-Customizer
2015-03-20T11:37:40Z esxupdate: imageprofile: INFO: Adding VIB VMware_bootbank_misc-drivers_5.5.0-2.54.2403361 to ImageProfile ESXi-Customizer
What you need to do is ensure either that the driver is not replaced (i.e. only install the required VIBs, which might be difficult), or - after installing the patch, and before rebooting the host - reinstall the required megaraid-sas driver.
The latest driver can be found at https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI55-LSI-SCSI-MEGARAID-SAS-66081100-1OEM...
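As a sketch of that sequence (the datastore name and the driver bundle filename below are placeholders - substitute your own paths; the patch bundle name is the one mentioned later in this thread):

```shell
# Apply the patch as usual
esxcli software vib install -d /vmfs/volumes/datastore1/ESXi550-201410001.zip

# Before rebooting, put the required megaraid-sas driver back
# (offline bundle from the download link above; filename is a placeholder)
esxcli software vib install -d /vmfs/volumes/datastore1/megaraid-sas-offline-bundle.zip

# Only now reboot the host
reboot
```

The key point is the ordering: the driver reinstall has to happen after the patch install but before the reboot, so the bootbank that the host boots from still contains the working storage driver.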
André
I'm having this same problem with a Smart Array P410 storage controller. I'm running ESXi 5.1 and did a reboot after a power outage, and it came up with the dreaded "the esxi host does not have persistent storage" message. All the storage shows up as 'Block SCSI', and it sees my 3 VMs, but they are all just labeled 'Inaccessible'. I believe ESXi did an automatic update in the background, as I haven't made changes since I installed ESXi 5.1. I also saw the linked article, and I don't have LVM either, so that didn't work. Anybody know the answer?
Hi. Did you get a resolution to this?
I have exactly the same issue with ESXi 5.5 build 2068190 running on three HP ProLiant DL380 Gen9 servers: the same "No persistent storage" error after applying a patch, with a subsequent reboot reverting back to build 2068190, where the storage can be seen. I have the same ESXi version, successfully patched, running with no issues on HP ProLiant DL380 G7 and Gen8 servers. I am stumped by this. I used the latest HP Customized CD to build the ESXi servers.
If you have a resolution, please can you post it? Thanks
No, unfortunately the issue is still unresolved. I called VMware this week to see how much a one-time support ticket costs... $300. I'm not sure what to do next. Interesting to note, you and I are both on the exact same build ID.
If it helps: I noticed that on the HP ProLiant DL380 Gen9 servers, the RAID controller VIB (hpsa for the HP servers) was being downgraded while the patch was applied. I could see that the patch does contain an older version of the hpsa VIB, but I have no idea why it is being downgraded. The same setup on an HP ProLiant DL380 G7 or DL380p Gen8 server does not have any issues with the hpsa driver being downgraded - it only seems to affect the newer model of server. I tried three different HP ProLiant DL380 Gen9 servers with the same results.
Maybe it is the same for your IBM server? I've not figured out a fix yet though so I am just running the ESXi server unpatched.
I'm not sure it would be. The RAID controller I'm using is LSI, and the server itself is a Lenovo. No IBM products here. Mine is also unpatched and my vulnerability scans are ugly 😞
Hi,
Could you please share the ESXi kernel log? Maybe we can get some clue from it.
Thanks
Hi,
Could you please run the command below on the ESXi host:
esxcfg-volume -M "UUID of datastore"
Once the command has run successfully, reboot the ESXi host and check the status of the datastore.
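For example (the UUID below is just an illustration - use the one reported for your own datastore):

```shell
# List unresolved/snapshot VMFS volumes the host can see, with their UUIDs
esxcfg-volume -l

# Persistently mount the volume by its UUID (example UUID, substitute your own)
esxcfg-volume -M 4f816940-26f1e08c-8a51-3c07543069ac
```

The lowercase -m option would mount the volume only until the next reboot; -M makes the mount persistent.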
Hope it will help.
Thanks
Here is the ESXi update log after applying the patch and rebooting:
2015-08-29T15:20:57Z esxupdate: root: INFO: Command = profile.setacceptance
2015-08-29T15:20:57Z esxupdate: root: INFO: Options = {}
2015-08-29T15:20:57Z esxupdate: HostImage: INFO: Installer <class 'vmware.esximage.Installer.BootBankInstaller.BootBankInstaller'> was not initiated - reason: altbootbank is invalid: Error in loading boot.cfg from bootbank /bootbank: Error parsing bootbank boot.cfg file /bootbank/boot.cfg: [Errno 2] No such file or directory: '/bootbank/boot.cfg'
2015-08-29T15:20:57Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/sbin/bootOption', '-rp']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-08-29T15:20:57Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/sbin/bootOption', '-ro']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-08-29T15:20:57Z esxupdate: HostImage: INFO: BootbankInstaller is not initialized, no need to keep LockerInstaller
2015-08-29T15:20:57Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '['/sbin/esxcfg-advcfg', '-U', 'host-acceptance-level', '-G']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2015-08-29T15:20:57Z esxupdate: root: DEBUG: Finished execution of command = profile.setacceptance
2015-08-29T15:20:57Z esxupdate: root: DEBUG: Completed esxcli output, going to exit esxcli-softwareinternal
and here is the kernel log:
Again, just to recap: I applied the next patch (ESXi550-201410001) to the server and rebooted as instructed. The build number changed to 2143827 as expected, as reported by the host itself with a monitor plugged in. Upon boot, no datastores were found, and vSphere warned me that the ESXi host does not have persistent storage.
I rebooted again and the build ID rolled back to 2068190, and the datastore mounted properly. After the second reboot, here's the kernel log:
Kernel Log After Reboot - Pastebin.com
and here's the update log:
Hint: Especially with custom installations, always run the installation of patches with the --dry-run option first to find out which VIBs will be replaced.
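For example, against the patch bundle mentioned in this thread (the datastore path is a placeholder):

```shell
# Report which VIBs would be installed/removed, without changing anything on the host
esxcli software vib install --dry-run -d /vmfs/volumes/datastore1/ESXi550-201410001.zip
```

The output lists "VIBs Installed" and "VIBs Removed"; if your storage driver appears under "VIBs Removed", you know to reinstall it before rebooting.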
André
Andre, you are INCREDIBLE. I can't thank you enough! I ran the patch, then installed the offline bundle, and after reboot my datastore is properly mounted and the patch applied successfully!! This problem has been a thorn in my side for MONTHS! Thank you!
I'll give it a try as well and will let you know - with my issue, I am sure it is downgrading the hpsa driver, so the same should apply. Is there a way of amending the patch zip files to remove the RAID driver so that it doesn't get applied?
AdamUK,
I don't know enough about ESXi to say for sure, but André said above that you could install just the individual VIBs if you need to. However, I found it simpler to just reinstall the LSI drivers after each patch. A bit of extra work, but really not that big of a deal. I now have a fully patched ESXi host after MONTHS of serious vulnerabilities. The fix was so simple I'm almost kicking myself (but really, I'm too new at this - there's no way I could have known). I learned a lot though; that "--dry-run" option is SUPER helpful.
AdamUK
With HP hardware - especially the latest Gen9 generation - you should be careful not to mix driver versions. What I would recommend is to use the HP OEM image or offline bundle to update the host (downloadable from https://my.vmware.com/group/vmware/details?downloadGroup=HP-ESXI-5.5.0U2-GA&productId=353). On this web page, you can also find a document which lists all the different drivers that are added/replaced in the OEM image.
Hint: You may receive an error message when you run the update. I did this on a couple of DL380 Gen9 hosts so far, and was able to resolve the issue by removing the "net-mst" VIB prior to running the update (this solution assumes that this Mellanox driver is not required).
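A sketch of that sequence, assuming the HP offline bundle has been uploaded to a datastore (the bundle filename below is a placeholder):

```shell
# Remove the Mellanox driver first (only if it is not required on your hosts)
esxcli software vib remove -n net-mst

# Then run the update from the HP OEM offline bundle (filename is a placeholder)
esxcli software vib update -d /vmfs/volumes/datastore1/VMware-ESXi-5.5.0-Update2-HP-bundle.zip
```

Using "vib update" rather than "vib install" here only updates VIBs already present at a lower version, which reduces the chance of pulling in unwanted extras from the bundle.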
André