That's because ESXi does not support on-board "fake RAID". AFAIK, only true hardware RAID controllers are supported. They are sometimes part of the motherboard, but that is not the case here: the "RAID" function of your motherboard is provided by the C602 chipset...
My tip: attach even the two drives you want to use for the system to the LSI controller and create a RAID1 there (use an expander if the 8 ports of your LSI RAID controller are not enough). But I think that is maybe overkill: the drive where you install ESXi is normally not used for anything else except reading the boot image...
After posting, this had crossed my mind. Do you have any suggestions for models of expander cards? I believe the 9240-8i supports up to 64 physical drives (in JBOD, or 16 per volume). I've never used them before.
Also, how do the expander cards impact performance?
Thanks.
I have seen expanders in various forms, and it is up to you to pick a suitable one. I have been using one in the form of a PCIe card because I could mount it in an empty slot (apart from that, it does not use any of the PCIe pins). Something like this:
There are other types, e.g. ones you can mount in an empty 2.5" or 3.5" drive position, and there are also special expanders for which a mounting position is already prepared in the server case. Just pick one with SFF-8087/SFF-8087 connectors, because that's what you have on your LSI RAID controller.
I've been finding it hard to get the Chenbro cards, so I ordered the Intel one instead. I should have mentioned in the OP that I am planning to add another 3 drives to the current 3, and also to migrate another RAID5 array of 5 disks to the card, so I guess I would need the expander either way (13 drives in total).