VMware Cloud Community
MarcBouchard
Enthusiast

Intel PRO/1000 NICs not working

New installation of ESX 4 in our lab on "unsupported" hardware. The system has 3 NICs: 2 Intel PRO/1000 adapters and an onboard Broadcom NIC. At installation I noticed something strange but figured I'd run through it anyway. To summarize the issue: only one NIC works (vmnic2, one of the Intel adapters). On the physical switch I see the link is up on all 3, but ESX says two of them are down.
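For anyone who wants to compare notes, these are (roughly) the standard console commands I know of for checking what ESX itself thinks of the NICs; nothing here is specific to my box:

# Confirm all three NICs are detected on the PCI bus (service console view)
lspci | grep -i ethernet

# Driver, link state, speed and duplex for each vmnic as the VMkernel sees it
esxcfg-nics -l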

I need those extra adapters for iSCSI support etc...

Any help would be much appreciated!

59 Replies
Texiwill
Leadership

Hello,

Moved to the vSphere Networking Forum.

Have you double-checked the ESX HCL for support for this specific NIC by version number?

Also, have you ensured that the motherboard and NICs have the recommended (perhaps latest) firmware/BIOS?


Best regards,

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
rago60
Contributor

Hello Texiwill,

I installed ESXi 4.0 yesterday. All NICs are working fine - no problems.

What do you mean by "Moved to the vSphere Networking Forum"? Is there a workaround for this problem?

My second ESX 4.0 server with identical hardware still shows the failure. I'm looking for the differences in the networking setup between the two. If I can fix the problem, I will post it here.

Rago60

Texiwill
Leadership

Hello,

There are multiple vSphere forums. Networking is one of them.

If you have identical hardware and one host works while the other does not, I would double-check the hardware as well as the BIOS/firmware revisions. Compare those between the hosts.


Best regards,

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
dilidolo
Enthusiast

I have an Intel DG33BU board with the same issue.

From what I see, the issue is with Intel BIOS IRQ sharing. In ESX, libata and vmnic2 share the same IRQ, which causes the additional vmnics not to be initialized. When I reloaded with ESXi the problem went away, because there is no COS and the IRQ allocation is different.

My server has an onboard 82566, an Intel PRO/1000 PT and an Intel PRO/1000 GT. The GT always works, and all 3 NICs are seen by ESX, but the 2 on the PCI-e bus always show as disconnected.

This issue is related to the COS, which is RHEL 5. I've searched and found people seeing the exact same issue with RHEL 5.
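If anyone wants to check for the same IRQ sharing on their own hardware, this is roughly where I'd look from the service console; the exact devices and log lines will of course differ per box:

# Interrupt assignments as seen by the RHEL 5 based service console
cat /proc/interrupts

# IRQ-related messages from the VMkernel side (driver load, vmnic init)
grep -i irq /var/log/vmkernel | less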

MarcBouchard
Enthusiast

Nice to see I wasn't the only one with this kind of issue. The fact that it works with ESXi points to the COS as mentioned; glad you found some other references to that issue!

MrShmen
Contributor

Glad to hear that issues with the Intel PRO/1000s are being experienced by other users out there... proves I am not going insane.

Here is my story....

Lab environment: (Being used to test migration to vSphere for our production environment.)

3 hosts: AMD 6000, 8GB RAM, 40GB SATA HDD, 1 onboard nVidia nForce adapter, 2x Intel PRO/1000 MT dual-port adapters (giving us 5 NICs in total). These are connected to a Linksys SRW2024 switch (set up as a dumb switch with no VLANs, just to ensure no one throws a network-config-issue response).

Install of ESXi 4.0 goes well; all adapters are detected and show as active. The nVidia card is nominated as the management adapter (default).

Logged in via the VI Client (over the nVidia card) with no dramas, and set up the networking based on a standard design (nVidia card for management, 1 Intel for iSCSI, 1 Intel for vMotion and 2 Intels for the VM network).

Now the fun begins....

No matter how I configure the host I cannot get any VMkernel services to work on the Intels... I can ping the configured IP and I can telnet to the ports, but it just does not want to give me the services (i.e. connect to iSCSI, or connect via the VI Client... I suspect vMotion would be dead as well, but I cannot test that as I have no SAN connection, because I cannot get the Intel card for iSCSI to work).
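For completeness, these are the kind of checks I'd run from the console (Tech Support Mode on ESXi) to separate a link problem from a VMkernel networking problem; the target IP below is just a placeholder for your iSCSI box:

# vSwitch uplinks and port groups - confirm the Intel vmnics are actually attached
esxcfg-vswitch -l

# VMkernel ports and their IP configuration
esxcfg-vmknic -l

# Test reachability of the iSCSI target through the VMkernel stack (example IP)
vmkping 192.168.1.50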

Here's the kicker...

Just for fun I built these boxes using ESXi 3.5 and, wouldn't you know it, everything works fine. I ran the upgrade thinking I might get some success, but as soon as the upgrade concludes I lose all VMkernel services running on the Intels... I suspect VMware has some driver work to do.

As this is loosely tied into work for the production environment (which is fully covered by VMware support) I logged a call, but I suspect they are just stalling me... i.e. "Run this useless unproductive test and send the logs", followed by "We cannot contact you... please call us", "Have you tried this really obvious test", "Can you reinstall to buy us another day of not working on the problem".

All I want is a "Hmm, yep, definitely looks like we have some crappy drivers" or a "We acknowledge this is an issue with the product and intend to have it rectified in a future release/update/patch", rather than the constant lack of conviction/action on the part of VMware... we may have to consider moving away from VMware if the service does not improve.

3apa3a_b_ta3e
Enthusiast

Same issue with an Intel DG35EC, an onboard Intel 82566DC network adapter and ESX 4.0 build 164009. ESXi build 164009 does not want to install on this hardware ("Can't enable additional CPU" or something), and now I'm downloading the latest ESXi build to try it.

Plus, I have an additional shutdown issue with both ESX and ESXi; it is described here - http://communities.vmware.com/thread/213651.

scratchfury79
Contributor

I have this same problem with ESXi 4.0 and an Intel Gigabit CT Desktop Adapter.

3apa3a_b_ta3e
Enthusiast

Are you sure? ESXi, not ESX?

scratchfury79
Contributor

Upon further testing it seems to be a motherboard driver issue. The card works in ESXi 3.5 without issue, and it works in a different model of computer with 4.0 without issue. Some people who've had similar issues have had luck disabling ACPI in the BIOS, but my BIOS does not have that option.

AramiS_1970
Contributor

Same issue with an Intel motherboard. Resolved with: esxcfg-module -s IntMode=0,0,0,0 e1000e

And reboot.
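For anyone who hasn't used esxcfg-module before, the full sequence looks roughly like this; as far as I can tell IntMode=0 forces the driver back to legacy interrupts, and the -g and esxcfg-nics calls are only there to confirm the change took:

# Set the interrupt mode option for the e1000e driver (one value per port)
esxcfg-module -s "IntMode=0,0,0,0" e1000e

# Confirm the option string has been stored for the module
esxcfg-module -g e1000e

# Reboot so the driver reloads with the new option
reboot

# After the reboot, check that the Intel vmnics now report a link
esxcfg-nics -l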

Hope that helps.

MarcBouchard
Enthusiast

Will try that as soon as I get a chance. Thanks for posting this!

scratchfury79
Contributor

I used "esxcfg-module -s IntMode=1 e1000e" and got my Intel Gigabit CT Desktop Adapter working under ESXi 4.0 on a Dell Optiplex 740. Thank you, AramiS_1970!

dilidolo
Enthusiast

I just tried ESX 4 U1, and the problem is fixed.

In addition to this problem, the "Synchronizing SCSI cache" issue on many Intel motherboards is fixed as well.

AramiS_1970
Contributor

How did you fix the Synchronizing SCSI cache issue?

Cheers

Aramis

dilidolo
Enthusiast

I didn't fix the issue myself, I just installed ESX 4 U1.

Both issues mentioned above are fixed in U1.

Witek_Rolka
Contributor

Hello

I have the same problem.

Ever since I upgraded from ESX 3.5 to ESX 4 and then ESX 4u1, my e1000 cards have not been working properly.

It started with the e1000 after the ESX 3.5 to 4 upgrade, where the connection would keep dropping out. I used this card for my iSCSI VMkernel port. I swapped the port to my e1000e card and that resolved the drop-out issue. Since the upgrade to ESX 4u1, both the e1000 and e1000e experience drop-outs. I applied esxcfg-module -s IntMode=1 e1000e to the iSCSI NIC and it resolved the "connection lost" error, but iSCSI performance seems slow.

The e1000, however, continues to lose connections, and I tried applying the same fix (esxcfg-module -s IntMode=1 e1000) to it, but now I have lost all connectivity.
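In case it saves someone else a rebuild: if the host becomes unreachable after a change like this, the option can be cleared again from the physical console and the host rebooted. This is only a sketch, and note that IntMode is a parameter I have only seen for e1000e, so I am assuming the e1000 driver does not handle it the same way:

# Show the option string currently stored for the driver
esxcfg-module -g e1000

# Clear the options back to the default (empty string), then reboot
esxcfg-module -s "" e1000
reboot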

At this rate I'm going to have to go back to ESX 3.5 for stability, because since the driver changes in ESX 4 and ESX 4u1 my test environment has been unusable.

Cheers.

Witek

Datto
Expert

On my older NForce4 motherboards in my home lab I had to switch from Intel 1000 MT PCI NICs and Intel 1000 CT PCI-e NICs to Intel 1000 PT PCI-e NICs in order to get connectivity back under the ESX 4.0 GA release. The 1000 PT NICs seem to work fine with the NForce4 chipset motherboards, but the CT and MT Intel NICs wouldn't work at all in that same ESX 4.0 GA box (the CT and MT NICs kept thinking there wasn't a cable connected).

Don't know whether that would help you but I thought I'd pass it along anyhow.

Datto

scratchfury79
Contributor

Datto,

The Dell Optiplex 740 I have uses the NForce4 chipset, and I had problems with both the MT and CT cards. Changing the setting for the e1000e driver made the CT card work, so you may have the same luck if you try to use it again. I haven't tested the MT card as I put it in another system.

Datto
Expert

Thanks for the tip scratchfury79 -- I may soon have one of the other NForce4 boxes around here doing ESX 4.0 duties and will use Intel NICs in it to try that tip out.

Datto
