Did you use the onboard Raid Controller?
Look into your bios if the Raid Array is in the list of boot devices. Sometimes you have to set it up manually.
IMHO the onboard controller is software-based (for RAID functionality in Windows you need a software driver), which is not supported in ESX.
If a driver is loaded during setup and you can install ESX, then ESX will see the two hard disks as single drives, but not as RAID 1.
Try disabling the RAID functionality and install ESXi on a single drive. Then you can use the second drive for an additional VMFS datastore ...
Onboard RAID like the ICH9R and ICH10R is not supported; ESXi will only see a bunch of separate disks. If you require RAID, you need a RAID controller that's on the HCL, like the Dell PERC 5/i or HP P400 or so.
Hehee, it seems that none of you were right. Boot sequence is set up as it should be, it had no effect on this. But, I managed to startup the ESXi with mirrored RAID, after I did a rebuild in RAID setup. Now it's working!
Are you sure that the RAID is actually working? I.e., remove one disk and see if your VMFS is still intact, then try the other disk.
Damn, you got it. The other disk has no MBR, or it is empty. So is it not possible to get mirrored RAID working? The RAID is built into the motherboard; there is no separate card in the computer.
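For reference, "has no MBR" here means the disk lacks the 0x55AA boot signature at bytes 510-511 of sector 0, which the BIOS requires to boot. Below is a generic sketch of that check, run against a scratch file instead of a real device (against a real disk you would point `dd` at something like `/dev/sdb` from a rescue shell; the file path here is just an example):

```shell
# Create a 512-byte scratch "disk" and stamp the MBR boot signature
# (0x55AA) at bytes 510-511, the way a bootable sector 0 would have it.
dd if=/dev/zero of=/tmp/fakedisk.img bs=512 count=1 2>/dev/null
printf '\x55\xaa' | dd of=/tmp/fakedisk.img bs=1 seek=510 conv=notrunc 2>/dev/null

# Read back the last two bytes of sector 0 and compare.
sig=$(dd if=/tmp/fakedisk.img bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
if [ "$sig" = "55aa" ]; then
    echo "MBR boot signature present"
else
    echo "no MBR signature"
fi
```

A mirror member that fails this check was never actually synced by the fake-RAID firmware, which matches what the rebuild step fixed earlier in this thread.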
As I said, most "consumer motherboard" onboard software-raid controllers are not supported. Use a hardware-raid controller that is on the HCL.
Damn it, now when I try to attach another disk, so that I would have two different datastores, the system won't start. Again: "BOOT FAILURE, INSERT DISKETTE....". It only starts up with one disk. It's the same problem I had when I tried to install ESXi with two hard disks inserted, without RAID: even though the installation completed successfully, it didn't boot. In the BIOS, I have no option to choose which SATA disk should be the bootable one.
I have had nothing but lockups when trying whitebox RAID on an ESXi 4.0 system.
I would only use whitebox RAID for a back-end NFS server, and only after stress testing the crap out of it with Iometer for a few days.
I would recommend buying a Dell PERC 5/i RAID card with an included cache battery on eBay. They are cheap, and ESXi supports disk status monitoring out of the box for this card (it's based on LSI).
I faced this problem too... but I think I found a workaround...
I had success 3 times with my own servers (DL360G5, DL180G5 and DL350G5).
1) Create an ARRAY (check if the array is OK);
2) Remove Second Disk;
3) Install ESXi 4;
4) Reboot and check if everything is OK with MBR and ESXi;
5) Turn Off the server;
6) Reinsert the second Disk;
7) Turn on the server and go to RAID Setup;
8) Rebuild the Array.
Done!!! (you can remove the first disk or second disk to check/test the RAID)
I will try it on my private server (ASUS M4A78 onboard RAID) and will tell you if it can be done there too...
If you have success (like me) with this workaround, please tell us...
Even if it does work, the onboard controller presumably has no battery-backed write cache (BBWC), so write performance will be horrid. Another vote for a PERC 5/i here.
I think the point is: is it possible or not to install ESXi 4 on an onboard or offboard RAID system?
I found a workaround to get ESXi 4 running on my HP servers... now everyone can do the same...
Performance is another thread...
I would suspect that the method creates only a point-in-time copy. Create some machines, remove the first disk, and they will probably be gone.