VMware Cloud Community
blanktree
Contributor

Issues with ESXi 4.1: Adaptec, Intel NIC and Gigabyte MB

Hello All

I have just purchased parts to create a new ESXi server.

Specs - Gigabyte GA-870A-UD3 Rev 3, BIOS flash date March 9th, 2011, with an AMD 1090T processor.

Adaptec 2405 RAID card, now flashed up to firmware 5.20 Build 18252

Intel PRO/1000 PCIe NIC

I can't find anything on the 4.1 HCL, but on the 4.0 list all of these are listed as compatible.

So prior to installing the RAID card I successfully installed ESXi 4.1 and then updated to U1. No issues.

My RAID card came in, so I installed it. Now when I start the server, the RAID set is detected (RAID 1), but at the end of the kernel load I get "No Compatible Network Adapter Found."

Everything works fine if I pull the RAID card out, so to me it seems like the RAID card and the NIC are conflicting in some respect.
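
I have not dug into it from the ESXi console yet, but if the host still boots far enough to reach local Tech Support Mode (Alt+F1), I believe something like the following would at least show whether the vmkernel can see the Intel card at all once the RAID card is in (the grep pattern is just a guess at the vendor string):

    lspci | grep -i intel     # is the card still visible on the PCI bus?
    esxcfg-nics -l            # has a driver actually claimed it as a vmnic?

If the card were missing from lspci entirely, that would point at a slot/BIOS resource problem rather than an ESXi driver problem.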

I spoke with Adaptec, and they only said to ensure it is flashed to the current firmware, which it now is.

I have scanned the forum but could not find anything similar to my problem.

I then tried a net-new install; it gets to the screen just prior to selecting the drive, errors out, and says "No Compatible Network Adapter Found."

I am not sure where else to turn, as this equipment is listed as compatible going back to 4.0. I understand, and have seen, issues some have faced with 4.1, but to me it seems like something about the RAID card is not happy.

Does anyone have any suggestions where to turn next?

Thanks everyone.

Dave_Mishchenko
Immortal

Do you have the same problem if you try a 4.0 install?   Do you have any spare slots to move the devices around?

blanktree
Contributor

Hello Dave

When I try to load ESX 4.0 I get an error - "Unable to load lvmdriver" - which is indicative of "ESX requires a compatible network card."

The RAID controller can only go in one slot due to the motherboard architecture, and there are two slots the NIC could go in; I have tried both.

Of note, with the RAID card in I can no longer see the PXE boot option from the NIC. Instead, depending on my BIOS settings, I either see the option to boot from CD, or, if I shut off the onboard RAID (which is best practice, as ESX doesn't support the onboard software RAID), it goes immediately to the RAID card's initialization screen.

So it appears the RAID card has taken over in the boot order, or perhaps there is a conflict at the BIOS level with the NIC.

This is my first time using the Adaptec 2405, so I am uncertain of its intended behavior at boot.

Thanks for your reply. 

Dave_Mishchenko
Immortal

Is there an onboard NIC to disable?

blanktree
Contributor

Good evening Dave

I did have the onboard NIC disabled.

As of this afternoon I solved the problem.

The Gigabyte GA-870A-UD3 manual has a note in fine print: the PCIEX1_1 and PCIEX1_2 slots share bandwidth with the PCIEX4 slot. When the PCIEX4 slot is populated with an x4 card, the PCIEX1_1 and PCIEX1_2 slots become unavailable.

So originally, when I put the RAID card in the PCI Express x16 slot running at x4 (PCIEX4), the board would not recognize the card. So I swapped the video card into that slot and moved the RAID card to the #1 PCIEX16 slot.

The video card is an x4 card, and therefore the NIC was unavailable to the bus, even though it functionally appeared to be working (lights blinking on the card).

So after flashing things the other night I left everything as it was.

I bought another NIC that appeared to be supported (PCI) and put it in. In the end it wasn't supported. So I started pulling cards one by one and found that, with the RAID card in, the NIC didn't work.

I read the fine print in the manual online and thought I was in the clear at this point.

In changing cards around, I put the video card back where it was supposed to be and put the RAID card back into its original slot. Still no NIC, but I had RAID, so next I moved the NIC to the PCIEX1_1 slot and...

Everything works

So the fix was to flash everything to current (although I think it was the MB flash that fixed it), then put the RAID card back where it should have been and put the NIC in the other PCIEX1 slot.
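
For anyone who ends up with the same combination, a quick sanity check from the local console (assuming Tech Support Mode is enabled; the grep pattern is just my guess at the vendor strings) would be something like:

    lspci | grep -i -E "intel|adaptec"   # both cards visible on the PCI bus
    esxcfg-nics -l                       # the PRO/1000 claimed as a vmnic
    esxcfg-scsidevs -a                   # the Adaptec 2405 claimed as a vmhba

If the NIC drops off the lspci output again after a BIOS or slot change, that points back at the slot sharing rather than at ESXi.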

So I have been running a VM since then and all looks good.

This one was a bit bizarre.

I thank you for your suggestions, but this one in the end is a bit weird. All of the pieces are noted on the HCL as working; however, I hadn't found much where someone had put them all together as one build to make it work.

Thanks

DSTAVERT
Immortal

There is no need to have a fancy video card. I would find an old PCI video card. Even though it works now, future updates etc. could cause issues. No point in loading up the PCIe bus.

-- David -- VMware Communities Moderator
blanktree
Contributor

I happened to have this card from a warranty exchange on a Dell Inspiron. In the end, having an available card caused a whole bunch more work, but it proved to be a good learning experience.

Available <> good

DSTAVERT
Immortal

The good part is that you are up and running.

-- David -- VMware Communities Moderator
Enteracloud
Contributor

I think, for as often as things just work under ESX and ESXi, we get spoiled and forget the complexities of supporting different hardware. Although not exactly the same scenario, we spent several hours troubleshooting an issue with onboard Broadcom NICs sharing the same hardware resources as a USB port. Lots of moving parts to keep track of behind the scenes.

Glad you got it fixed!

Enteracloud | Cloud Computing & Infrastructure Solutions http://www.enteracloud.com
DSTAVERT
Immortal

A good practice is to disable any unused onboard devices. Servers still come with serial and parallel ports and a bunch of USB ports. Since they require interrupts and will be serviced by the CPU, they consume resources.
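
If you want to double-check what the host is still presenting after trimming things in the BIOS, I believe lspci from the local console will list every PCI device the vmkernel can see, and anything you disable in the BIOS should drop off that list on the next boot:

    lspci | more   # page through the PCI devices ESXi can still see

(Serial and parallel ports generally won't show up there since they aren't PCI devices, but it is a quick way to confirm a disabled controller or onboard NIC is really gone.)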

-- David -- VMware Communities Moderator
blanktree
Contributor

Basically, the big thing was that, by design, the Gigabyte board behaves in this fashion.

However, when flashed to the most current level, the behavior seems to change a bit. So I suspect that flashing the MB and the RAID card together changed the overall behavior of the motherboard and its acceptance of PCIe cards. I should have flashed them separately to see which one actually fixed it.

This has now been running for a few weeks, seems stable hosting 5 VMs, and barely does anything more than idle along.

In the end the timing was perfect, as prior to this I was using VMware Server 2.0 and some of my VMs were on a NAS. Early last week my NAS had a dual drive failure (RAID 1), but I had already migrated everything over to the newly built server.

Thanks everyone for your suggestions.
