VMware Cloud Community
FrankGrey
Contributor

VMware ESXi Installation Failing - Help!

Hello,

Although I have used VMware Workstation in the past, I am new to VMware ESXi, so please bear with me on this. I will try to be as detailed as possible. I recently purchased components for a server that I would like to try installing ESXi on as a test bed before rolling it out to other hardware. Unfortunately, things are not going as I would have hoped: ESXi 3.5 is not installing. The message I am getting is not terribly helpful either - something along the lines of required hardware not being detected. But let's start at the beginning, with the hardware:

- Motherboard: Supermicro X8DTi

- Processors: 2x Intel Xeon E5520

- Memory: 6x 2GB Crucial DDR3-1333 ECC Unbuffered SDRAM

- Hard Drive: 1x SATA Hard drive (Western Digital branded, 640GB - Caviar Black series)

- Optical Drive: 1x SATA Optical Drive (LG branded)

- USB Mouse/USB Keyboard/LCD monitor plugged in to VGA-OUT on motherboard

Basically, my procedure was:

1. Connect all hardware, make sure it passes POST and verify everything is seen in BIOS

2. Put a DVD containing an image of VMware ESXi 3.5 into the DVD drive, watch it load, then fail to install.

Interestingly, the failure message states I should write down hardware information before asking for help, but it doesn't seem like something worth posting here... it incorrectly detected my motherboard, and also generated a serial number of something like "1234567890". I thought posting the actual hardware in the box might help more.

Any assistance you could give would be greatly appreciated.

23 Replies
Dave_Mishchenko
Immortal

Welcome to the VMware Community forums. Take a look at the first item on this page - http://www.vm-help.com/esx/esx3i/ESXi_35_common_issues.php.

ShanVMLand
Expert

It looks like your motherboard is not supported by ESX 3i. However, please check the whitebox list; I am not seeing your motherboard model there.

Shan

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

FrankGrey
Contributor

Well, I've gotten a little further, but now I'm hitting another issue.

Because of potential compatibility issues etc. I gave ESXi 4.0 a shot. It made it further than 3.5 update 4 on the installation, but it's still not able to complete. What happens now is that I hang during the very last install portion - on the IPMI driver. This is odd, because I don't believe the X8DTi even has IPMI.

In doing a search, all I could find is that some people had somehow bypassed the installation of this component, but I have no idea how they might have done that. Any thoughts?

Texiwill
Leadership

Hello,

Moved to the ESXi 4 forum.

Are the storage and network controllers on your motherboard on the HCL? Is the host on the HCL? If not, then you may just have an unsupported configuration; ESXi has a pretty stringent set of hardware requirements. I would also verify that the firmware for the box and I/O devices is at the proper levels for ESX.


--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill

Datto
Expert

I have two ESXi white boxes (ESXi 3.5 and 4.0) that require a keystroke to be hit on the way up after a reboot -- otherwise the reboot gets stuck and won't proceed to the login prompt. I usually hit TAB or RIGHT-ALT with RIGHT-SHIFT. You might be facing the same thing -- try a reboot, and when the boot stops, try hitting one of those two keystrokes and holding them down for 3-4 seconds from the actual console of the server to see if the boot will proceed ahead to the login prompt.

Datto

FrankGrey
Contributor

To answer your questions, I believe the answer to each is Yes. My motherboard has an Intel ICH10R chipset (I am not using the RAID functionality) which appears to be on the HCL for 3.5 U4 at least (which also did not install, and actually broke earlier) so it should be good for 4.0... and the network controllers are dual Intel 82576, which are on the HCL for 4.0.

My motherboard does not have any exotic components; it's a pretty vanilla feature set for a dual 55xx-series Xeon setup, really. I don't understand why the install hangs on the IPMI driver either; my motherboard doesn't even have an IPMI port, so I would think it would just be skipped.

@ Datto - I will try giving those keystrokes a shot, but I'm not even able to finish installing. As far as I can tell this should be about the last portion of the last installation screen that I just can't get past, so we'll see I guess.

Does it matter at all that I currently only have a single SATA drive hooked up? I haven't seen anything stating that a different storage method would be required or that a single drive couldn't be used, but I figured I'd check.

Datto
Expert

One other thing I'd try if the keystroke trick doesn't help is to change your SATA to AHCI type rather than IDE type and see if the install completes.

By the way, I have to do the keystroke trick during my install of ESXi as well as after the installation when the server is booted up or else the install won't complete or the server won't continue to the login prompt after installation.

Datto

filbo
Enthusiast

Because of potential compatibility issues etc. I gave ESXi 4.0 a shot. It made it further than 3.5 update 4 on the installation, but it's still not able to complete. What happens now is that I hang during the very last install portion - on the IPMI driver. This is odd, because I don't believe the X8DTi even has IPMI.

According to SuperMicro's motherboard matrix, there are 3 X8DTi models; the ones called "X8DTi-LN4F" and "X8DTi-F" have IPMI while just plain "X8DTi" doesn't. So it becomes fairly important to know your exact model name.
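
If the board is already in the case and hard to inspect, the exact model string can usually be read out of SMBIOS from any Linux live CD with dmidecode (run as root). This is just a quick check, not a substitute for the fuller dump I ask for below:

#!/bin/sh
# Quick SMBIOS query for the exact board model and BIOS level (run as root).
dmidecode -s baseboard-manufacturer
dmidecode -s baseboard-product-name   # should report X8DTi, X8DTi-F or X8DTi-LN4F
dmidecode -s bios-version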

My SuperMicro has it very nicely stenciled on the motherboard, if you can't find it anywhere else. Also, mine was one that doesn't come with IPMI, but has it as an addon, which I've added. It goes into a special "IPMI slot" and the card is about the size of, um, a playing card cut in half along its length? About 3x1". Since there are three closely allied models where a major difference is IPMI, I would imagine IPMI is a similar add-on card for yours. You should see an empty slot for it if you don't have the card.

Also, BIOS setup is aware of it. I think it's in the standard BIOS rather than a separate BIOS (like you see from NIC and HBAs). I say "I think" because I don't feel like shutting down my ESX box right now to check ;-}

Whether or not you have IPMI hardware, something is apparently wrong with the IPMI driver. But it would be nice to start out knowing whether we're looking for a problem initializing hardware -- or a problem when it can't find any hardware.

If you can't get past this, boot up some random (recent) Linux distro and collect the output of:

1. dmidecode

2. dmidecode -u

3. biosdecode

4. lspci -vvnn

5. lshw -sanitize -numeric

6. hwinfo

Don't worry about any utilities that are missing; I'm going for overkill / multi-distro coverage. Post everything as a single attachment (either one long text log, or a tarball of the individual "this > this.out; that > that.out" outputs). A rough collection script follows.
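
Something like this would do the collecting. It's purely illustrative; if a tool isn't installed, its output file just records the error and the script carries on:

#!/bin/sh
# Rough hardware-info collector; run as root from a recent Linux live CD.
OUT=/tmp/hwinfo-collect
mkdir -p "$OUT"
dmidecode                > "$OUT/dmidecode.out"    2>&1
dmidecode -u             > "$OUT/dmidecode-u.out"  2>&1
biosdecode               > "$OUT/biosdecode.out"   2>&1
lspci -vvnn              > "$OUT/lspci-vvnn.out"   2>&1
lshw -sanitize -numeric  > "$OUT/lshw.out"         2>&1
hwinfo                   > "$OUT/hwinfo.out"       2>&1
tar czf /tmp/hwinfo-collect.tgz -C /tmp hwinfo-collect
echo "Attach /tmp/hwinfo-collect.tgz to your reply."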

In doing a search, all I could find is that some people had somehow bypassed the installation of this component, but I have no idea how they might have done that. Any thoughts?

It's easy in ESX Classic: you just run "service ipmi disable". You can do something similar in the "unsupported" shell interface in ESXi, but I don't see how you get there from here when you're dying during installation. If you were booting from a USB stick I would say to just boot it on another system, disable IPMI startup, shut down, and carry it back to this system.

>Bela<

PVerijke
Contributor

I have the same motherboard on my new server. X8DTi.

This server was delivered with ESXi preinstalled, and obviously the NIC was not working and it got stuck during boot on the ipmi_si_drv module.

After upgrading to ESXi 4.0 the NIC works, as it is on the HCL. The ipmi_si_drv boot problem stays.

Anyway, it hangs on that module for over an hour and then it continues.

I know this was an issue on HP servers and earlier Supermicro motherboards, and it was solved with a BIOS upgrade.

I guess for now we have to live with a very slow boot until Supermicro comes out with a BIOS fix.

P.S. I use an internal USB stick to boot from.

filbo
Enthusiast

This server was delivered with ESXi preinstalled

Who is delivering "preinstalled" ESXi with incompatible NIC and unable to even boot past IPMI module load?!?

Anyway, it hangs on that module for over an hour and then it continues.

I guess for now we have to live with a very slow boot until Supermicro comes out with a BIOS fix.

I'm pretty sure if you add the actual SuperMicro IPMI module, this problem will go away. If you get the version with KVM capability then you get a significant benefit ("lights out" remote keyboard/mouse/video access). I have one of those in my main test box and it's a real boon. Unlike most of the OEM KVM solutions, this one actually works. I thought it was well worth the price of around $100.

But throwing hardware at a software problem is irksome. How can you fix this in software?

I think there must be more graceful ways to do this, but since I don't know them, here's the brain surgery way... Actually four ways since 3.5 and 4.0 differ, and ESXi differs from ESX, on this issue.

To disable the IPMI driver on ESXi 3.5 (booting from USB key):

1. Shut down the ESXi host.

2. Remove the USB key.

3. Take it to a host which can mount FAT filesystems read/write and which is happy with tar format (most Linux boxes, or Windows with Cygwin or another Unix emulation environment).

4. You will be modifying environ.tgz from the bootbank. The bootbank is usually partition 5 (according to Linux fdisk), but may be partition 6 depending on system history. Find environ.tgz on one or both of these banks (a consolidated sketch of the Linux-side steps follows the list).

5. Extract it:

# mkdir -p /root/esxi-environ
# cd /root/esxi-environ
# tar xzf /path-to-mounted-bootbank/environ.tgz

6. Edit the file sbin/config:

# sed -i '/loadIPMI = /s/1/(disable IPMI)/' sbin/config

7. Repack the environ tarball:

# tar czf /path-to-mounted-bootbank/environ.tgz *

8. Repeat for environ.tgz on the other bootbank, if both exist.

9. Unmount the USB key.

10. Carry it back to the ESXi host and boot up.
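
On the Linux side, steps 4 through 9 boil down to something like the sketch below. It is only a sketch: /mnt/bootbank is an example mount point, and sdX5 / sdX6 stand in for whatever partitions fdisk actually shows as the bootbanks on your key.

#!/bin/sh
# Sketch of steps 4-9 above; run as root on the Linux box with the USB key attached.
# /dev/sdX5 and /dev/sdX6 are placeholders for the real bootbank partitions.
for PART in /dev/sdX5 /dev/sdX6; do
    mkdir -p /mnt/bootbank
    mount "$PART" /mnt/bootbank || continue
    if [ -f /mnt/bootbank/environ.tgz ]; then
        rm -rf /root/esxi-environ && mkdir -p /root/esxi-environ
        cd /root/esxi-environ
        tar xzf /mnt/bootbank/environ.tgz
        sed -i '/loadIPMI = /s/1/(disable IPMI)/' sbin/config   # the edit from step 6
        tar czf /mnt/bootbank/environ.tgz *
        cd /
    fi
    umount /mnt/bootbank
done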

To disable the IPMI driver on ESX Classic 3.5:

1. Log in as root on the service console.

2. Disable the driver with:

# chkconfig ipmi off

3. If HP Insight Manager is installed, you may need to remove HP's IPMI driver:

# rpm -e hp-OpenIPMI

4. Other management agents may explicitly start IPMI by running e.g. "service ipmi start"; if so, you may need to make additional changes to stop that. One way would be to add an "exit 0" to the top of /etc/init.d/ipmi (sketched below).

5. IPMI will be disabled on the next reboot.
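
For the record, the whole Classic 3.5 procedure fits in a few commands on the service console. This is only a sketch of the steps above: the rpm removal applies only if HP's agents are installed, and the sed line assumes the first line of /etc/init.d/ipmi is a shebang.

#!/bin/sh
# Run as root on the ESX 3.5 service console.
chkconfig ipmi off                          # step 2: don't start the driver at boot
rpm -q hp-OpenIPMI && rpm -e hp-OpenIPMI    # step 3: only if HP Insight Manager put it there
# step 4: belt and braces -- insert "exit 0" right after the shebang line
sed -i.bak '1a exit 0  # IPMI disabled by hand' /etc/init.d/ipmi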

To disable the IPMI driver on ESXi 4.0 (booting from USB key):

1. Turn on the "sticky bit" on /etc/vmware/init/init.d/72.ipmi:

# cd /etc/vmware/init/init.d
# chmod +t 72.ipmi

2. Edit the file to disable IPMI:

# sed -i '/Exec/s/^/return ${SUCCESS}  # disable IPMI\n\n/' 72.ipmi

3. IPMI will be disabled after the next graceful shutdown and reboot (a quick pre-reboot check is sketched below).
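
Before rebooting, it's worth double-checking that both halves of the edit actually took; nothing fancy, just the obvious checks from the same shell:

#!/bin/sh
# Sanity check from the ESXi 4.0 "unsupported" shell before the reboot.
cd /etc/vmware/init/init.d
ls -l 72.ipmi                    # the mode string should end in 't' (sticky bit set)
grep -n 'disable IPMI' 72.ipmi   # should show the inserted "return" line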

To disable the IPMI driver on ESX Classic 4.0:

1. Log in as root on the service console.

2. Edit /etc/vmware/init/init.d/72.ipmi to disable IPMI:

# cd /etc/vmware/init/init.d
# sed -i '/Exec/s/^/return ${SUCCESS}  # disable IPMI\n\n/' 72.ipmi

3. IPMI will be disabled after the next reboot.

If you are booting ESXi from a hard disk (ESXi Installable), you need to do the same edits as in the ESXi instructions above, but the file locations may differ. I don't have an Installable setup here to check the details.

Disabling IPMI has the following effects:

1. OEM management agents running on the host will no longer get IPMI-based information like CPU temperatures, fan speeds, and power alerts.

2. Remote management consoles may or may not lose this information. They can get IPMI readings from daemons running on the host, but they can also talk to a host's IPMI BMC (Baseboard Management Controller) directly over TCP/IP, bypassing the host OS, in which case they can still get this information. All of the management consoles I'm familiar with use agents on the host, but they might also be programmed to fall back to direct IPMI-over-TCP access.

3. VI Client "Health Status" information will be reduced (losing the same CPU temperatures, fan speeds, event alerts, etc.).

4. On some hosts, kernel threads named "kipmi0" and possibly "kipmi1" will no longer be created. These kernel threads can consume significant CPU on some hosts (less than it looks in top, since they run at the lowest possible priority and will be preempted by anything else that needs to run). A quick post-reboot check for these is sketched below.
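
To confirm after the reboot that the driver really stayed out, the obvious checks from the ESXi shell are:

#!/bin/sh
# Run from the ESXi "unsupported" shell after the reboot.
vmkload_mod -l | grep -i ipmi    # expect no output if the IPMI modules didn't load
ps | grep '[k]ipmi'              # likewise, no kipmi0/kipmi1 threads expected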

Hope this helps,

>Bela<

Message was edited by: filbo: added "effects of disabling IPMI" topic.

FrankGrey
Contributor

The solution ended up being that my SATA type had to be set to AHCI rather than Enhanced IDE in BIOS. I have no idea why, but as long as it works I'm happy.

For future reference, please note:

My installation still takes 90+ minutes because it hangs on installing IPMI drivers, but with AHCI enabled it does at some point complete. After that, the first boot again takes 90+ minutes. However, once booted I was able to edit my kernel/boot options to disable loading the IPMI driver via the GUI and now boots take an appropriate amount of time.

To me this indicates that there is still a bug with ESXi 4.0, but at least this is a temporary workaround.

Note: Another poster had asked which motherboard I am using, as "X8DTi" is the base model and there are two models with extensions onto that... I am using the base model, no extensions.

Thanks to everyone for their help!

filbo
Enthusiast

The solution ended up being that my SATA type had to be set to AHCI rather than Enhanced IDE in BIOS. I have no idea why, but as long as it works I'm happy.

Ah, sounds like you owe a "correct answer" to Datto.

By changing that BIOS setting, you influence ESXi to load a different driver for the controller.

In "Enhanced IDE" mode, the kernel loads an "ata" family driver such as "ata_piix". Drives connected through that are not seen by the kernel as SCSI devices. vmkernel uses of direct IDE devices are fairly limited, for instance you cannot have a VMFS on a device operated by an IDE (ata) driver. I don't think that should make your install hang, you should have gotten some sort of error message, but either way it was probably not going to result in a useful installation.

In "AHCI" mode, the kernel loads a "sata" family driver like "sata_svw" or (most likely) "ahci". Drives connected through those are seen by the kernel as SCSI devices. In that configuration, the very same device can be used for any vmkernel purpose (including VMFS).

My installation still takes 90+ minutes because it hangs on installing IPMI drivers, but with AHCI enabled it does at some point complete. After that, the first boot again takes 90+ minutes. However, once booted I was able to edit my kernel/boot options to disable loading the IPMI driver via the GUI and now boots take an appropriate amount of time.

Would you please detail how you disabled IPMI via GUI? I should have that info so I can simplify the litany in my reply.

To me this indicates that there is still a bug with ESXi 4.0, but at least this is a temporary workaround

Agreed. As one of the maintainers of the ESX IPMI drivers, I have no doubt they are buggy and can use all the help they can get. (Part of the problem is that vendor implementations of IPMI vary tremendously...)

There is also probably a SuperMicro BIOS bug. If you don't mind pursuing this a bit, would you send me (privately) output of esxcfg-info (run from ESXi "unsupported" shell), and -- if possible -- output of dmidecode; dmidecode -u from a modern Linux booted on the same hardware? Any 2.6-based LiveCD should probably do. This output (and possibly further tests I'll ask you to run) should enable me to harden the ESX IPMI driver against the problem, while also, hopefully, being able to inform SuperMicro what their bug is so they will repair their BIOS.

>Bela<

Datto
Expert

>> One other thing I'd try if the keystroke trick doesn't help is to change your SATA to AHCI type rather than IDE type and see if the install completes.

>> The solution ended up being that my SATA type had to be set to AHCI rather than Enhanced IDE in BIOS.

Frank -- Glad I could help.

Datto

PVerijke
Contributor

Hi Filbo,

This server was delivered with ESXi preinstalled

Who is delivering "preinstalled" ESXi with incompatible NIC and unable to even boot past IPMI module load?!?

I totally agree.


If they had at least installed ESXi 3.5 with the latest patches, then the NIC would have worked.
Anyway, not a big issue as I have installed ESXi 4.0 now, but I would have liked to avoid the extra work.

bq. >
Anyway, It hangs on that module for over one hour and then it continiues.
I guess for now we have to live with a very slow post until supermicro comes out with a BIOS fix.

I'm pretty sure if you add the actual SuperMicro IPMI module, this problem will go away. If you get the version with KVM capability then you get a significant benefit ("lights out" remote keyboard/mouse/video access). I have one of those in my main test box and it's a real boon. Unlike most of the OEM KVM solutions, this one actually works. I thought it was well worth the price of around $100.<div>That is nice to know, as I never tried the IPMI module of supermicro.</div>

I have used HP's iLO a lot, and I don't find it that impressive.


The problem with all these KVM solutions is that very often the mouse is not synchronised well at all, and it is always very slow.


Anyway, my current motherboard is without IPMI, and I think you cannot add it later on.

There are four types of this board (if I recall correctly), two of which come with IPMI.

Anyway, worth trying next time around.



bq. >


To disable the IPMI driver on ESXi 4.0 (booting from USB key):

  1. turn on the "sticky bit" on /etc/vmware/init/init.d/72.ipmi:

    # cd /etc/vmware/init/init.d
    # chmod +t 72.ipmi
    

  1. edit the file to disable IPMI:

    # sed -i '/Exec/s/^/return ${SUCCESS}  # disable IPMI\n\n/' 72.ipmi
    

  1. IPMI will be disabled after the next graceful shutdown and reboot.</div><div>I tried this, and it works like a charm, thanks for that.


Preferrably i would have had a bios update, as now i could have limited hardware monitoring, but it is better then having to wait for a long boot.


filbo
Enthusiast

I have used HP's iLO a lot, and I don't find it that impressive.

The IPMI unit on my SuperMicro has fewer capabilities than HP iLO. However, I am very satisfied with the capabilities it does have, especially the KVM.

The problem with all these KVM solutions is that very often the mouse is not synchronised well at all, and it is always very slow.

I can't really address your concern about the mouse -- the 'M' part of KVM is never very important to me. Far more important to be able to intercept BIOS startup, get into BIOS setup, interact with GRUB, and get a console login to my booted up ESX or interact with DCUI of my booted up ESXi. I have also done most of these things through the SoL (Serial-over-LAN) capability of the SuperMicro IPMI card, so technically the KVM isn't adding that much. But it's much nicer to use and there was hardly any price difference between the KVM and no-KVM versions of the IPMI daughtercard.

To disable the IPMI driver on ESXi 4.0 (booting from USB key):

1. Turn on the "sticky bit" on /etc/vmware/init/init.d/72.ipmi:

# cd /etc/vmware/init/init.d
# chmod +t 72.ipmi

2. Edit the file to disable IPMI:

# sed -i '/Exec/s/^/return ${SUCCESS}  # disable IPMI\n\n/' 72.ipmi

3. IPMI will be disabled after the next graceful shutdown and reboot.

I tried this, and it works like a charm, thanks for that.

And thank you for confirming 1/4 of my recipe :-)

Preferably I would have had a BIOS update, as now I may have limited hardware monitoring, but it is better than having to wait for a long boot.

You have not in any way limited your hardware monitoring. Your system doesn't have an IPMI BMC, therefore all the IPMI driver is supposed to do is load, look around, determine it has nothing to do, and exit. By shunting around the load statement in /etc/vmware/init/init.d/72.ipmi, all you're doing is pre-answering its question of "do I have anything to do here?".

Also, what is the effect of disabling the IPMI driver in a system that does have an IPMI BMC? It means that the host OS (vmnix Linux for ESX Classic, vmkernel for ESXi) IPMI driver will not be talking to the BMC. Therefore the CIM broker (Pegasus or sfcbd) will not have health info to offer, VC will not collect it, and VI Client will not display it. None of this stops the BMC from doing its stuff. If you connect to it with any other tool -- ipmitool over the LAN, ssh into the BMC's control console, https to the BMC's web page -- you can still access health info. You're just knocking it out of ESX & VC's view.
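
For anyone who does have a BMC and wants to keep an eye on health after knocking the driver out of ESX's view, a minimal out-of-band query with stock ipmitool from any other machine looks roughly like this (the BMC address and credentials below are placeholders for your own BMC, not anything ESX provides):

#!/bin/sh
# Talk straight to the BMC over the LAN; the ESX host OS is not involved.
# bmc.example.com / admin / changeme are placeholders for your own BMC.
BMC=bmc.example.com
ipmitool -I lanplus -H "$BMC" -U admin -P changeme sdr list   # sensor readings
ipmitool -I lanplus -H "$BMC" -U admin -P changeme sel list   # system event log
# (use "-I lan" instead of lanplus if the BMC only speaks IPMI 1.5)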

>Bela<

Compitem
Contributor

I have done it this way, and at the next reboot the IPMI is off.

But if I reboot again, the file "72.ipmi" seems to be restored.

I alter the 72.ipmi again and restart, but the IPMI driver still loads and the 72.ipmi is again restored to the old value.

I alter it again and restart, and then the IPMI is off again.

Now my question is: how can I delete these three IPMI driver files? Where can I find them? On ESXi 3.5 there was an archive called "binmod.tgz".

But where are they in ESXi 4?

I have an IBM 326m... Dual Opteron 8GB RAM

Dave_Mishchenko
Immortal

If you enable the sticky bit on the file (i.e. chmod +t 72.ipmi), then ESXi will back up the file to the system state, so you don't have to worry about the system firmware files (which would be replaced the next time you patch your host). A rough sequence is sketched below.
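
For what it's worth, the whole persistent-change sequence from the ESXi 4.0 "unsupported" shell could look roughly like this. /sbin/auto-backup.sh is, if I remember right, the script the host runs periodically to save that system state, so running it by hand just saves you from waiting for the next scheduled pass; treat that last step as an assumption and verify the file survives a reboot.

#!/bin/sh
# Run from the ESXi 4.0 "unsupported" shell.
cd /etc/vmware/init/init.d
chmod +t 72.ipmi          # flag the file for inclusion in the config backup
sed -i '/Exec/s/^/return ${SUCCESS}  # disable IPMI\n\n/' 72.ipmi
/sbin/auto-backup.sh      # assumption: force the state backup now rather than
                          # waiting for the scheduled run before rebooting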

Compitem
Contributor

This is the strange behavior I see. I do the following on the "72.ipmi" file:

"chmod +t 72.ipmi"

and alter the file with:

"sed -i '/Exec/s/^/return ${SUCCESS}  # disable IPMI\n\n/' 72.ipmi"

and reboot the host with my VI Client. After booting I look at "72.ipmi" again, and it all seems to be OK.

Then I reboot again, the same way with my VI Client. Looking again, the file looks as if it has never been altered.

What exactly do you mean by "... next time you patch your host..."? How can I, as you described, patch my host right away to make my changes permanent?

Compitem
Contributor

Considering that ESXi 4 is brand new, I have not found a patch as you described ("... next time you patch ..."). Do I need a "dummy file" to make it permanent...?
