Using the HPE 6.7U1 image "VMware-ESXi-6.7.0-Update1-10302608-HPE-Gen9plus-670.U184.108.40.206.12-Oct2018.iso" on an ESXi 6.5U2 host running "HPE-ESXi-6.5.0-Update2-iso-Gen9plus-650.U220.127.116.11.5", Update Manager reports it as Non-Compliant, which is correct.
But after the upgrade I have a 6.7 Host which reports as "Incompatible" due to "Cannot create a ramdisk of size 359MB to store the upgrade image. Check if the host has sufficient memory."
But I have set ScratchConfig.CurrentScratchLocation and it persists after reboots.
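For anyone else checking this, the scratch settings are easy to confirm from the ESXi shell. A minimal check (these are the standard advanced-option paths; nothing host-specific assumed):

```shell
# Scratch location in effect right now (read-only at runtime)
esxcli system settings advanced list -o /ScratchConfig/CurrentScratchLocation

# Persistent setting that takes effect on the next reboot
esxcli system settings advanced list -o /ScratchConfig/ConfiguredScratchLocation
```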
When I reboot the host after the 6.7 upgrade, it reverts to 6.5.
vdf -h shows:
Ramdisk Size Used Available Use% Mounted on
root 32M 2M 29M 7% --
etc 28M 364K 27M 1% --
opt 32M 532K 31M 1% --
var 48M 632K 47M 1% --
tmp 256M 728K 255M 0% --
iofilters 32M 0B 32M 0% --
shm 1024M 0B 1024M 0% --
hostdstats 1303M 12M 1290M 0% --
snmptraps 1M 0B 1M 0% --
Is it actually the SD card that lacks space, or something else?
Also check if the root account on ESXi still has "administrator" access.
Check /etc/vmware/hostd/authorization.xml. For root, the ACEDataRoleId value should be set to '-1', which means Administrator.
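A quick way to verify that from the ESXi shell is simply to grep the file mentioned above (-C3 just prints a few lines of context around the match):

```shell
# The entry for root should contain <ACEDataRoleId>-1</ACEDataRoleId>
grep -C3 "root" /etc/vmware/hostd/authorization.xml
```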
Is your issue resolved, or are you still facing it? Let me know so I can assist further.
Good to know. Once solved, let us know the cause and resolution as provided by VMware support.
Personally, I'm currently investigating whether devListStabilityCount=10 needs to be added to /bootbank/boot.cfg.
I saw that before the upgrade bootbank -> /vmfs/volumes/0c517feb-0fb973e9-b89d-76f429f6fd45, and after the upgrade bootbank -> /tmp.
After reboot it reverted back to bootbank -> /vmfs/volumes/0c517feb-0fb973e9-b89d-76f429f6fd45.
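The symlinks are a quick sanity check before and after an upgrade attempt; on a healthy host both bootbanks resolve to /vmfs/volumes directories, never to /tmp:

```shell
# Where do the bootbanks actually point?
ls -ld /bootbank /altbootbank

# Does the host see its boot device at all?
esxcfg-scsidevs -a
```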
That led to this article: VMware Knowledge Base
But I'm not sure that applies to upgrades to 6.7U1 so I'm testing if adding that devListStabilityCount=10 makes a difference.
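For reference, this is roughly what that workaround looks like. I'm treating devListStabilityCount=10 as the KB's suggested value, not something verified for 6.7U1, so back up boot.cfg first:

```shell
# Keep a copy of the original boot.cfg
cp /bootbank/boot.cfg /bootbank/boot.cfg.bak

# Append the option to the existing kernelopt= line (BusyBox sed on ESXi supports -i)
sed -i 's/^kernelopt=.*/& devListStabilityCount=10/' /bootbank/boot.cfg

# Verify the result
grep ^kernelopt /bootbank/boot.cfg
```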
There are two LUNs on that SD card which the ESXi host boots from, and no utility to delete the last LUN. You can delete the partitions on that last LUN, but not the LUN itself.
The main problem is that the upgrade does not see the first LUN, so the upgrade is written to /tmp instead of the bootbank.
The upgrade therefore appears to succeed until you reboot the host. Then it boots up as ESXi 6.5U2 instead of 6.7U1.
Neither upgrading through Update Manager nor mounting the ISO file on the host makes any difference. Only a clean install does.
I did a clean install on the two troublesome hosts and they work now.
I have 6 other hosts with only one LUN each on their SD cards, and those were upgraded without any incident.
Turned out to be an HPE Embedded User Partition. Disable it as a boot source and delete it in the server BIOS. That solves the problem.
https://support.hpe.com/hpsc/doc/public/display?docId=c04398276 page 60 Delete Boot Option.
Thanks for sharing the root cause and details. It will definitely be helpful to the community folks.
On another note, HPE being HPE. ¯\_(ツ)_/¯
I got a similar problem on a DL360 Gen10. Upgrading from 6.5 to 6.7 Update 2 reverted back to 6.5 on the second reboot after the upgrade. bootbank got mounted to /tmp, all nasty.
Gen10 servers with iLO 5 do not have embedded partitions that can be enabled/disabled, so that was not the problem/fix here.
I troubleshot for quite some time and found that after upgrading to 6.7 Update 2, the vmkusb driver was not loaded. esxcfg-scsidevs -a did not output an entry for the mirrored USB boot device as it did in 6.5, so I suspected the problem was related to the vmkusb driver in 6.7 combined with the 8GB mirrored microSD USB adapter. No further root cause known to me at the moment.
more info about the vmkusb driver : VMware Knowledge Base
What I did to eventually perform the upgrade successfully:
I rebooted the 6.5 host, hit Shift+O on the ESXi boot screen,
and added preferVmklinux=TRUE as a boot parameter.
The host will boot using the vmklinux drivers for the USB storage instead of vmkusb. You will see the output of esxcfg-scsidevs -a change to a vmhba32 entry; the driver used for the boot device is now called "usb-storage" instead of "vmkusb".
I disabled the vmkusb driver with:
esxcli system module set -m=vmkusb -e=FALSE
I performed the upgrade to 6.7 (in my case using the CLI and a zip file as a depot, but it should work with VUM as well, I think) and rebooted. bootbank and altbootbank mounted correctly and everything seemed OK; I also performed a second reboot. Looking good.
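For anyone wanting the CLI route, this is roughly what I mean. The datastore path and profile name below are examples only; list the profiles contained in your own zip first and pick the right one:

```shell
# Show the image profiles inside the offline bundle
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-6.7U2-depot.zip

# 'update' keeps third-party VIBs; 'install' would replace everything
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-6.7U2-depot.zip -p ESXi-6.7.0-20190402001-standard

reboot
```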
I checked the kernel parameter using:
esxcli system settings kernel list | grep prefer
Check that preferVmklinux is set to enabled. It was in my case; if not, use:
esxcli system settings kernel set -s preferVmklinux -v TRUE
The disadvantage of this solution is that you lose Quickboot as a feature. That's really disappointing.
I also tried to revert to vmkusb after the upgrade, just out of curiosity, but that resulted in vmkusb not loading again and bootbank mounting to /tmp etc. No good.
So it seems I'm stuck with the legacy USB drivers and no Quickboot.
I'm done with SD boot cards and USB mirrored boot solutions for sure!
Better pay up, and stay up!
Thanks for sharing this!
Have you checked with VMware support whether a workaround is possible so you don't lose Quickboot?
I have not contacted support about this yet. The requirements for Quickboot are pretty clear: it can't work with the legacy drivers enabled. The strategy for the legacy drivers is clear as well. I don't have high hopes for any engagement with support. I might do it later, but I'm hesitant because I suspect it will take too much of my time working with support and collecting logs just to get to the point I'm at right now.
I found something on the web similar to your question; see if this is useful to you - https://www.nuttycloud.com/cannot-create-a-ramdisk-of-size-387mb-to-store-the-upgrade-image-check-if...
I ran into the same kind of error when I was trying to upgrade from 6.0 to 6.5 a month ago.