Configuration: Dell PE 6650, 4 CPU, 4GB, PERC 3/DC, 3 36GB drives configured as JBOD.
Trying to install ESX 3.5 from a CD burned with the ISO image from www.vmware.com/download.
After the install finishes, the server reboot terminates with the message "Mounting root failed. Dropping into basic maintenance shell."
The same thing happens if I try to upgrade a successful installation of ESX 3.0.2 to ESX 3.5.
Booting to the "Service Console only" mode is successful and the installation root disk is successfully mounted.
This sounds like something is hosed in the VMware startup scripts.
Anyone else had this problem? Any suggestions to resolve this problem?
Thanks!
01:00.0 RAID bus controller: Adaptec Adaptec Rocket (rev 02)
Subsystem: International Business Machines ServeRAID 8k/8k-l8
Flags: bus master, fast devsel, latency 0, IRQ 22
Memory at e7a00000 (64-bit, non-prefetchable)
Memory at e7e00000 (64-bit, prefetchable)
Expansion ROM at <unassigned>
Capabilities: Message Signalled Interrupts: 64bit+ Queue=0/2 Enable-
On the IBM xSeries 336 running VMware ESX 3.5, lspci -v shows:
00:1f.2 IDE interface: Intel Corporation 82801EB (ICH5) SATA Controller
On the same machine running ESX 3.0.2, which works fine, lspci -v shows:
00:1f.2 IDE interface: Intel Corporation Unknown device 24d1
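For anyone wanting to compare the two builds the same way, the device-ID check can be scripted. A minimal sketch using the two output lines quoted above (the file paths are illustrative; on a live host you would pipe `lspci -v` directly instead of reading saved files):

```shell
# Sample lspci lines as quoted above, one file per ESX release.
cat > /tmp/lspci-esx35.txt <<'EOF'
00:1f.2 IDE interface: Intel Corporation 82801EB (ICH5) SATA Controller
EOF
cat > /tmp/lspci-esx302.txt <<'EOF'
00:1f.2 IDE interface: Intel Corporation Unknown device 24d1
EOF

# Same PCI function (00:1f.2) in both builds; only the name lookup differs.
grep '^00:1f.2' /tmp/lspci-esx35.txt
grep '^00:1f.2' /tmp/lspci-esx302.txt
```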
I have a Dell 6850. I executed the following commands to increase the queue depth:
esxcfg-module -s ql2xmaxqdepth=64 qla2300_707_vmw
esxcfg-boot -r
After reboot, I got the same error "Mounting root failed" and busybox prompt.
I can log in to troubleshooting mode without any issues, and it mounts the root volume.
I tried executing
esxcfg-boot -p
esxcfg-boot -r
but no luck.
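For reference, `esxcfg-module -s` persists the option string into /etc/vmware/esx.conf, so you can confirm what actually got written before rebooting. A minimal sketch against a mock config file (the key path shown is a hypothetical example; check your own esx.conf for the exact format your build writes):

```shell
# Mock esx.conf entry; the key layout is an assumption, not necessarily
# what your build writes.
cat > /tmp/esx.conf.example <<'EOF'
/vmkernel/module/qla2300_707_vmw.o/options = "ql2xmaxqdepth=64"
EOF

# Verify the queue-depth option landed in the file before rebooting.
grep 'ql2xmaxqdepth' /tmp/esx.conf.example
```

If the line looks wrong, it can be removed from troubleshooting mode before the next reboot, which is what eventually worked for others in this thread.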
Managed to fix my "mounting root failed"...
Downgraded the Perc 3/DC firmware from 199D to 199A and now ESX 3.5 boots up
VMware officially does not support this RAID card; I don't know why.
Where can the 199A firmware be downloaded? I've searched the Dell site but could only find the latest version.
thanks
Confirmed! Downgrading the firmware on the PERC 3/DC controller from rev 199D to rev 199A resulted in a bootable VMware 3.5 server.
thank you!
Same/similar problem here. HP BL460c, internal RAID used for the ESX installation. Had 3.0.2, all patches installed, running just fine, used the 3.0-3.5 tarball to upgrade. Everything seemed fine, no disks anywhere near full, no error messages during the upgrade. After reboot, can't mount root. I figured the quickest solution for me was to do a clean 3.5 installation rather than trying to find out what went wrong. Feels cleaner anyway, but of course a serious problem for those not having that option.
Clean 3.5 installation went fine, and it boots. Having some iSCSI problems now, but that is another story. Back on 3.0.2 for now (I had mirrored disks, so I could go back to 3.0.2 with minimal effort).
I had the same issue. I have a PE2600 with a PERC 4Di. I had the latest firmware and was getting the same error. I rolled back the FW to the following and 3.5 works great.
Let me know if this helped.
Running firmware 199A does indeed allow 3.5 to boot and appear to be working.
I have however another problem. If I install 2-3 VMs on the local VMFS it works fine. Any subsequent VMs seem to get disk corruption.
I would run chkdsk on a VM and it would find errors; it would then repair them, and if I ran chkdsk again it would find errors again, and not the same ones.
I have tried creating new VMs and restoring existing VMs; nothing seems to work past around 2-3 VMs.
3.0.2 works just fine. Even VMs that seemed corrupted work if I leave the VMFS volume intact when downgrading from 3.5 to 3.0.2.
Rolling back to 3.0.2 again.
I have only 2 VMs on my PE2600. They are in production. I would love to add a couple more to test. Are the VMs getting corrupted the new VMs or the old ones?
I was able to get mine working. Call technical support. Apparently, in the ESX 3.5 build I had, there was an RPM that did not run during the upgrade. VMware has since released a new build that corrects this problem.
I tried it with both existing and new VMs. I rebuilt the server between each test.
Also, after installing 3.0.2 over 3.5 and keeping only the VMFS, those VMs work again. It is really strange.
I have actually installed the latest ISO thinking it might have to do with the RPM issue, but that wasn't it.
I ran into the same issue with ESX 3.5, running on an HP ProLiant DL145, 2 CPUs with 4GB RAM. I tried starting in troubleshooting mode and got this error:
Initialization of vmkernel failed, status 0xbad0013
I found a solution for this here:
and followed the description there, which said that the DIMMs should be installed two per CPU, not four on one CPU. This solved my issue.
I am posting this here, as the initial issue I saw was Mounting Root Failed and I invested quite some time following the messages in this thread without luck, and with a lot of work.
I hope this will help some more people.
I had a similar issue today after updating the ql4xmaxqdepth setting for my QLogic 4062s on 2 of my ESX boxes. After rebooting I was getting the "Mounting root failed" error. I ended up rebooting into troubleshooting mode and removing the last line in the /etc/vmware/esx.conf file, which referenced the setting change I had made (it looked something like "vmkernel....esxcfg-module -options ql4xmaxqdepth=200........").
Anyway, hope that helps someone. Now I have to find out what the proper value is, seeing how the EqualLogic documents say it should be 200.
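The removal step above can be done with a backup first, so the change is easy to undo. A sketch against a mock file (the module key and surrounding lines are assumptions for illustration, since the poster's exact line was elided):

```shell
# Mock esx.conf; the key names here are hypothetical stand-ins.
cat > /tmp/esx.conf <<'EOF'
/adv/Misc/HostName = "esx01"
/vmkernel/module/qla4022.o/options = "ql4xmaxqdepth=200"
EOF

cp /tmp/esx.conf /tmp/esx.conf.bak       # keep a backup before editing
sed -i '/ql4xmaxqdepth/d' /tmp/esx.conf  # drop the offending options line
```

Keeping the backup means you can restore the original file once you've confirmed the correct queue-depth value.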
Thread moved to the correct sub-forum
Tom Howarth VCP / vExpert
VMware Communities User Moderator
Blog: www.planetvm.net
Contributing author for the upcoming book "VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment" (http://my.safaribooksonline.com/9780136083214). Currently available on Rough Cuts.