Hi,
regards,
Raghavendrachari
Hi, please check your primary boot drive in the BIOS under "Boot Configuration Features", then "Hard Disk Drives", to see how your disk is set.
I got around it by setting my primary boot disk mode to "SATA" instead of "AHCI".
What type of blade server is it?
Hi,
It is a CompactPCI blade server.
regards,
Raghavendrachari
Hi,
We tried all the available options you suggested, but we are still facing the same problem.
regards,
Raghavendrachari
Hi,
It is a CPCI 7203 blade server with a single Intel Core i7 uniprocessor.
regards,
Raghavendrachari
I'm having the same problem here on a whitebox system, a Huron River Intel platform based on the QM67 chipset. I think the crux of the problem is that the boot options that disabled ACPI and worked for versions 4.1 and earlier don't seem to work for ESXi 5.0. Can someone at VMware provide this information, or at least let us know what we need to do to prevent either the installer or the boot loader (a boot.cfg option?) from initializing ACPI?
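For anyone wanting to experiment, the general mechanism for passing a kernel option is either pressing Shift+O at the ESXi boot prompt or appending the option to the `kernelopt=` line in `boot.cfg` on the install media. A minimal sketch of the file edit is below; note that `noACPI` is only a placeholder flag name here (the open question in this thread is precisely which flag, if any, ESXi 5.0 accepts), and the sample file contents are illustrative, not a real 5.0 boot.cfg.

```shell
# Illustrative boot.cfg fragment in the shape the ESXi bootloader reads.
# The entries below are a simplified stand-in, not a real 5.0 file.
cat > boot.cfg <<'EOF'
bootstate=0
title=Loading ESXi installer
kernel=/tboot.b00
kernelopt=runweasel
EOF

# Append a kernel option to the kernelopt line. The flag name "noACPI"
# is an assumption for illustration; substitute whatever 5.0 supports.
sed 's/^kernelopt=.*/& noACPI/' boot.cfg > boot.cfg.new && mv boot.cfg.new boot.cfg

# Show the resulting option line
grep '^kernelopt=' boot.cfg
```

Pressing Shift+O at boot achieves the same thing interactively without editing the media.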
I am having this issue on a Shuttle SZ77R5 with an i7-3770 processor. Any word on boot options to get around this?
Is your hardware listed on the HCL? I guess this is because of unsupported ACPI drivers.
Regards
Ameen Munaf
Hi ,
We upgraded the BIOS to the newly available version, and then ESXi 5.0 installed successfully on our blade server.
regards,
Raghavendrachari
Tried the latest BIOS on my Shuttle box, but still no go. Might be stuck with Hyper-V for my test lab for now.
No, it isn't listed on the HCL. I was hoping for some workaround options like those that were available in previous ESXi versions.
dasmiffs,
The SZ77R5 appears to support UEFI boot drives. Have a look at the Boot tab in the BIOS and you'll see an area "UEFI Boot Drives". Make sure the drive/USB stick you are installing to has UEFI enabled.
Also see http://communities.vmware.com/thread/403231?start=15&tstart=0 for some more background on the issue.
Thanks, I will give that a go and update soon.
Hi ,
For your kind information, please check whether your hardware is listed on VMware's HCL before installation; otherwise, don't proceed.
regards,
Raghavendrachari
Raghavendrachari,
See http://communities.vmware.com/message/1840034#1840034
The above is an example of where an Intel server board is listed on the HCL, but its second onboard NIC does not work because VMware hasn't yet provided driver support for it (although that thread is talking about ESXi 4, the server board is listed on the HCL as compliant with both v4 and v5). Most of this thread is about creating development or lab ESXi 5 servers rather than production environments.
Most of the contributors here have tried things, and whether they worked or not, we've provided feedback for the benefit of others.
My experience is that it is too risky to use a non-HCL system for ESXi production environments.
I've been looking at the Shuttle SZ77R5, and this does look like a great portable lab/dev ESXi environment: it supports the Ivy Bridge 4-core processor and up to 32GB of DDR3-1600 memory with normal DIMMs. It has one onboard gigabit NIC (not sure if supported), so you might need to add an extra one. It also has a PCIe x16 slot, so people who want to RAID1 this could use an Intel RS2WC040 PCIe RAID controller (circa USD$350).
The Shuttle SZ77R5 would now be my preference if you needed a portable solution (e.g., working on customer sites) because of the small footprint. I'd use 4x8GB DDR3-1600 memory, RAID1 with the above RAID card, an Intel PCIe dual-port NIC, and an external DVD drive to keep power use as low as possible. I've always been a strong advocate of Intel whitebox architecture for servers and desktops, but after looking at the Shuttle, I think this is the perfect lab/dev box.
I'd also probably use VMware Workstation 8 and run up ESXi 5 as a VM with "Virtualise Intel VT-x/EPT" enabled; you could allocate 16GB to ESXi.
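For anyone trying the nested setup above, the "Virtualise Intel VT-x/EPT" option corresponds to a couple of lines in the VM's .vmx file. This is a sketch only; the values shown are the commonly cited ones for Workstation 8, so verify against your build before relying on it.

```
# Hypothetical .vmx additions for running ESXi 5 nested in Workstation 8.
guestOS = "vmkernel5"      # ESXi 5 guest type
vhv.enable = "TRUE"        # expose Intel VT-x/EPT to the guest
memsize = "16384"          # 16GB allocated to the nested ESXi host
```

The `vhv.enable` setting is what lets the nested ESXi host run its own 64-bit VMs.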
I like the idea of the Intel DQ77KB as it could potentially be used as a lab in an ITX case, but it is simply impractical because of the incompatible onboard NIC, the 16GB memory limit, and the lack of expandability: there is only one PCIe x4 slot, and if you use this for the NIC, what would you use for the RAID controller?
Sorry, I got my threads crossed a bit: most of my comments above were in relation to http://communities.vmware.com/thread/403231?start=30&tstart=0