Glad to hear that issues with the Intel 1000's are being experienced by other users out there... proves I am not going insane.
Here is my story....
Lab environment: (Being used to test migration to vSphere for our production environment.)
3 Hosts.. AMD 6000, 8GB RAM, 40GB SATA HDD, 1 onboard nVidia nForce adapter, 2x Intel Pro 1000 MT dual port adapters (giving us 5 NICs in total).... Connected these to a Linksys SRW2024 switch (set as a dumb switch with no VLANs configured, just to ensure no one throws a network config issue response).
Install of ESXi 4.0 goes well, all adapters are detected and show as active.... nVidia card nominated as mgt adapter (Default).
Log in via VI Client (nVidia card) no dramas and set up the networking based on a standard design. (nVidia card as mgt, 1 intel for iSCSI, 1 intel for vMotion and 2 intel for VM network).
Now the fun begins....
No matter how I configure the host, I cannot get any VMkernel services to work on the Intels... I can ping the configured IP and I can telnet to the ports, but the services themselves just do not work (i.e. connecting to iSCSI or connecting via the VI Client). I suspect vMotion would be dead as well, but I cannot test it, as I have no SAN connection because I cannot get the Intel card for iSCSI to work.....
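For anyone wanting to reproduce the checks from the service console, something like the following can help narrow down whether the VMkernel port itself is alive (a troubleshooting sketch; the target IP is a placeholder for your own iSCSI target):

```shell
# List VMkernel NICs and confirm the iSCSI vmknic has the expected IP/netmask
esxcfg-vmknic -l

# List physical NICs and their link state (check that the Intels show "Up")
esxcfg-nics -l

# Ping from the VMkernel TCP/IP stack itself -- a plain "ping" from the
# console only tests the service console interface, not the vmknic
vmkping 192.168.1.50   # substitute your iSCSI target's IP
```

If `vmkping` fails while a normal `ping` of the same vmknic IP from another box succeeds, the problem is in the VMkernel/driver path rather than the physical network.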
Here's the kicker....
Just for fun I built these boxes using ESXi 3.5 and wouldn't you know it... everything works fine.... I ran the upgrade thinking I might get some success, but as soon as the upgrade concludes I lose all VMkernel services running on the Intels... I suspect VMware have some driver work to do.
As this loosely ties into work for the production environment (which is fully covered by VMware support) I logged a call, but suspect they are just stalling me... i.e. "Run this useless unproductive test and send the logs" followed by a "We cannot contact you.. please call us".... "Have you tried this really obvious test"..... "Can you reinstall to buy us another day of not working on the problem".
All I want is a "Hmm, yep, definitely looks like we have some crappy drivers" or a "We acknowledge this is an issue with the product and intend to have it rectified with a future release/update/patch" rather than the constant lack of conviction/action on the part of VMware........ may have to consider moving away from VMware if the service does not improve.
Same issue with an Intel DG35EC board, onboard Intel 82566DC network adapter, and ESX 4.0 build 164009. ESXi build 164009 would not install on this hardware ("Can't enable additional CPU" or something similar), and now I'm downloading the latest ESXi build to try it.
Plus, I have an additional shutdown issue with both ESX and ESXi; it is described here - http://communities.vmware.com/thread/213651.
I have this same problem with ESXi 4.0 and an Intel Gigabit CT Desktop Adapter.
Are you sure? ESXi, not ESX?
Upon further testing it seems to be a motherboard driver issue. The card works in ESXi 3.5 without issue, and the card works in a different model computer with 4.0 without issue. Some people with similar issues have had luck disabling ACPI in the BIOS, but my BIOS does not offer that option.
Same issue with an Intel motherboard; resolved with: esxcfg-module -s IntMode=0,0,0,0 e1000e
Hope that helps.
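In case it saves someone some typing, here is roughly how to apply and verify that workaround from the service console (a sketch; the right IntMode value seems to vary per board and per port count, so adjust the comma-separated list to match your adapter):

```shell
# Set the interrupt-mode option on the e1000e module
# (one value per port; 0 = legacy interrupts instead of MSI/MSI-X)
esxcfg-module -s IntMode=0,0,0,0 e1000e

# Confirm the option string was stored for the module
esxcfg-module -g e1000e

# The new option takes effect after the driver reloads, i.e. after a reboot
reboot
```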
Will try that as soon as I get a chance. Thanks for posting this!
I used "esxcfg-module -s IntMode=1 e1000e" and got my Intel Gigabit CT Desktop Adapter working under ESXi 4.0 on a Dell Optiplex 740. Thank you, AramiS_1970!
I just tried ESX 4 U1, and the problem is fixed.
In addition, the "Synchronizing SCSI cache" issue on many Intel motherboards is fixed as well.
How did you fix the Synchronizing SCSI cache issue?
I didn't fix the issue myself; I just installed ESX 4 U1.
Both issues mentioned above are fixed in U1.
I have the same problem.
Ever since I upgraded to ESX4 and then ESX4u1 from ESX3.5 my e1000 cards have not been working properly.
It started with the e1000 after the ESX3.5 to 4 upgrade, where the connection would keep dropping out. I used this card for my iSCSI VMkernel port. I swapped the port to my e1000e card and that resolved the drop-out issue. Since the upgrade to ESX4u1, both the e1000 and e1000e experience drop outs. I applied esxcfg-module -s IntMode=1 e1000e to the iSCSI NIC and it resolved the connection-lost error, but iSCSI performance seems slow.
The e1000 however continues to lose connections, and I tried the same thing, applying esxcfg-module -s IntMode=1 e1000 to it, but now I have lost all connectivity.
At this rate I'm going to have to go back to ESX3.5 for stability because since the driver changes in ESX4 and ESX4u1, my test environment is unusable.
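For anyone else who ends up locked out after setting a module option, it should be possible to back it out from the physical console (a sketch, assuming an empty option string resets the module to its defaults; run it at the local service console since the network is down):

```shell
# Clear the previously set option string on the e1000 module
esxcfg-module -s "" e1000

# Verify that no options remain set for the module
esxcfg-module -g e1000

# Reboot so the driver reloads with default options
reboot
```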
On my older NForce4 motherboards in my home lab, I had to switch from Intel 1000 MT PCI NICs and Intel 1000 CT PCI-e NICs to Intel 1000 PT PCI-e NICs in order to get connectivity back under the ESX 4.0 GA release. The 1000 PT NICs seem to work fine with the NForce4 chipset motherboards, but the CT and the MT Intel NICs wouldn't work at all in that same ESX 4.0 GA box (the CT and MT NICs kept thinking there wasn't a cable connected to the NIC).
Don't know whether that would help you but I thought I'd pass it along anyhow.
The Dell Optiplex 740 I have uses the NForce4 chipset, and I had problems with both the MT and CT cards. Changing the setting for the e1000e driver made the CT card work, so you may have the same luck if you try to use it again. I haven't tested the MT card as I put it in another system.
Thanks for the tip scratchfury79 -- I may soon have one of the other NForce4 boxes around here doing ESX 4.0 duties and will use Intel NICs in it to try that tip out.