Do you have the same problem if you try a 4.0 install? Do you have any spare slots to move the devices around?
When I try to load ESX 4.0 I get an error, "Unable to load lvmdriver," which is indicative of ESX requiring a compatible network card.
The RAID controller can only go in one slot due to the motherboard architecture, and there are two slots the NIC could go in; I have tried both.
Of note: with the RAID card in, I can no longer see the PXE boot option from the NIC. Instead, depending on my BIOS settings, I either see the option to boot from CD, or, if I shut off the onboard RAID (which is best practice, since ESX doesn't support the onboard software RAID), it goes immediately to the RAID initialization screen.
So it appears the RAID card has taken over, or perhaps there is a conflict with the NIC at the BIOS level.
This is my first time using the Adaptec 2405, so I am uncertain of its intended behavior at boot.
Thanks for your reply.
Is there an onboard NIC to disable?
Good evening, Dave.
I did have the onboard NIC disabled.
As of this afternoon I solved the problem.
The Gigabyte GA-870A-UD3 motherboard has a fine-print note that the PCIEX1_1 and PCIEX1_2 slots share bandwidth with the PCIEX4 slot. When the PCIEX4 slot is populated with an x4 card, the PCIEX1_1 and PCIEX1_2 slots become unavailable.
Originally, when I put the RAID card in the PCI Express x16 slot running at x4 (PCIEX4), the board would not recognize the card, so I swapped the video card into that slot and moved the RAID card to the primary PCIEX16 slot.
The video card is an x4 card, so the NIC was unavailable to the bus even though it appeared to be working (lights blinking on the card).
So after flashing things the other night I left everything as it was.
I bought another NIC (PCI) that appeared to be supported and put it in; in the end it wasn't supported. So I started pulling cards one by one and found that with the RAID card in, the NIC didn't work.
I read the fine print in the manual online and thought I was out of the woods at this point.
While changing cards around, I put the video card back where it was supposed to be and the RAID card back into its original slot. Still no NIC, but RAID worked, so next I moved the NIC to the PCIEX1_1 slot and...
So the fix is to flash everything to current firmware (although I think it was the motherboard flash that fixed it), then put the RAID card back where it should have been and the NIC in the other PCIEX1 slot.
So I have been running a VM since then and all looks good.
This one was a bit bizarre.
I thank you for your suggestions, but this one was a bit weird in the end. All of the pieces are noted to work on the HCL; however, I hadn't found much from anyone who had put them all together to make them work as one.
There is no need for a fancy video card; I would find an old PCI video card. Even though it works now, future updates could cause issues, and there is no point in loading up the PCIe bus.
I happened to have this card from a warranty exchange on a Dell Inspiron. In the end, having an available card caused a whole bunch more work, but it proved to be a good learning experience.
Available <> good
The good part is that you are up and running.
I think, for as often as things just work under ESX and ESXi, we get spoiled and forget the complexities of supporting different hardware. Although not exactly the same scenario, we spent several hours troubleshooting an issue with onboard Broadcom NICs sharing hardware resources with a USB port. Lots of moving parts to keep track of behind the scenes.
Glad you got it fixed!
A good practice is to disable any unused onboard devices. Servers still come with serial and parallel ports and a bunch of USB ports; since they require interrupts and are serviced by the CPU, they consume resources.
Basically, the big thing was that, by design, the Gigabyte board behaves in this fashion.
However, when flashed to the most current level, the behavior seems to change a bit. So I suspect flashing the motherboard and the RAID card together changed the board's overall behavior and its acceptance of PCIe cards. I should have flashed them separately to see which one actually fixed it.
This has now been running for a few weeks, seems stable, is hosting 5 VMs, and barely does anything more than idle along.
In the end the timing was perfect, as before this I was using VMware Server 2.0 and some of my VMs were on a NAS. Early last week the NAS had a dual drive failure (RAID 1), but I had already migrated everything over to the newly built server.
Thanks everyone for your suggestions.