Hello,
Our dual-port gigabit network card only gets a 10 Mbit link in our server.
We have more of these cards in other ESX servers, and there they do get a 1-gigabit link and work fine.
We have tried almost everything with this server: different cables, different slots in the system, the latest BIOS version, the latest VMware ESX version, a clean install, etc.
If we can get the network card to work at full speed, we can finally put the system in production.
Does anyone have any suggestions? Thanks for your help.
Cheers,
Harry
VMware-esx-3.5.0-82663
Linux XXXX.XXXX.XXX 2.4.21-47.0.1.ELvmnix #1 Tue Mar 18 18:07:00 PDT 2008 i686 i686 i386 GNU/Linux
Settings for vmnic0:
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: g
Cannot get message level: Function not implemented
Link detected: yes
Settings for vmnic1:
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: g
Cannot get message level: Function not implemented
Link detected: yes
Settings for vmnic2:
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000007 (7)
Link detected: yes
Settings for vmnic3:
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 10Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
00:00.0 Host bridge: Intel Corporation: Unknown device 25c0 (rev 12)
00:02.0 PCI bridge: Intel Corporation: Unknown device 25e2 (rev 12)
00:03.0 PCI bridge: Intel Corporation: Unknown device 25e3 (rev 12)
00:04.0 PCI bridge: Intel Corporation: Unknown device 25f8 (rev 12)
00:05.0 PCI bridge: Intel Corporation: Unknown device 25e5 (rev 12)
00:06.0 PCI bridge: Intel Corporation: Unknown device 25f9 (rev 12)
00:07.0 PCI bridge: Intel Corporation: Unknown device 25e7 (rev 12)
00:08.0 System peripheral: Intel Corporation I/OAT DMA controller (1a38) (rev 12)
00:10.0 Host bridge: Intel Corporation: Unknown device 25f0 (rev 12)
00:10.1 Host bridge: Intel Corporation: Unknown device 25f0 (rev 12)
00:10.2 Host bridge: Intel Corporation: Unknown device 25f0 (rev 12)
00:11.0 Host bridge: Intel Corporation: Unknown device 25f1 (rev 12)
00:13.0 Host bridge: Intel Corporation: Unknown device 25f3 (rev 12)
00:15.0 Host bridge: Intel Corporation: Unknown device 25f5 (rev 12)
00:16.0 Host bridge: Intel Corporation: Unknown device 25f6 (rev 12)
00:1c.0 PCI bridge: Intel Corporation: Unknown device 2690 (rev 09)
00:1d.0 USB Controller: Intel Corporation: Unknown device 2688 (rev 09)
00:1d.1 USB Controller: Intel Corporation: Unknown device 2689 (rev 09)
00:1d.2 USB Controller: Intel Corporation: Unknown device 268a (rev 09)
00:1d.7 USB Controller: Intel Corporation: Unknown device 268c (rev 09)
00:1e.0 PCI bridge: Intel Corporation 82801BA/CA/DB/EB PCI Bridge (rev d9)
00:1f.0 ISA bridge: Intel Corporation: Unknown device 2670 (rev 09)
00:1f.1 IDE interface: Intel Corporation: Unknown device 269e (rev 09)
01:00.0 PCI bridge: Intel Corporation: Unknown device 0370
01:00.2 PCI bridge: Intel Corporation: Unknown device 0372
02:0e.0 RAID bus controller: Dell Computer Corporation PowerEdge Expandable RAID Controller 5
04:00.0 PCI bridge: ServerWorks: Unknown device 0103 (rev c3)
05:00.0 Ethernet controller: Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T (rev 12)
06:00.0 PCI bridge: Intel Corporation: Unknown device 3500 (rev 01)
06:00.3 PCI bridge: Intel Corporation: Unknown device 350c (rev 01)
07:00.0 PCI bridge: Intel Corporation: Unknown device 3510 (rev 01)
07:01.0 PCI bridge: Intel Corporation: Unknown device 3514 (rev 01)
08:00.0 PCI bridge: ServerWorks: Unknown device 0103 (rev c3)
09:00.0 Ethernet controller: Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T (rev 12)
0a:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
0a:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
10:0d.0 VGA compatible controller: ATI Technologies Inc: Unknown device 515e (rev 02)
/var/log/vmkernel
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.137 cpu0:1024)Loading module bnx2 ...
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.139 cpu0:1024)Mod: 936: Starting load for module: bnx2 R/O length: 0x10000 R/W length: 0x17000 Md5sum: 2ae8002ff9bf7f86abe01d2f66b0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)Mod: 1373: Module bnx2: initFunc: 0x8aaf98 text: 0x8a1000 data: 0x28c23c0 bss: 0x28d6260 (writeable align 32)
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)Mod: 1389: modLoaderHeap avail before: 7809080
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)Initial heap size : 102400, max heap size: 4194304
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)PCI: driver bnx2 is looking for devices
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)PCI: Trying 00:08.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)PCI: Trying 02:0e.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)PCI: Trying 05:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)PCI: Announcing 05:00.0
May 8 10:47:48 XXXX vmkernel: <6>Broadcom NetXtreme II Gigabit Ethernet Driver bnx2 v1.5.10b (May 1, 2007)
May 8 10:47:48 XXXX vmkernel: <6>vmnic0: Broadcom NetXtreme II BCM5708 1000Base-T (B2) PCI-X 64-bit 133MHz found at mem da000000, IRQ 113, node addr 001aa023e81b
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)PCI: driver bnx2 claimed device 05:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)PCI: Registering network device 05:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)Uplink: 2082: Couldn't find vmnic0. Creating a new node
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)Uplink: 3477: Connecting device vmnic0 to pps
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)Uplink: 3622: Device vmnic0 yet to come up
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)LinPCI: 202: Device 5:0 claimed.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)Mod: 2530: called already for this device.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)PCI: Trying 09:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.247 cpu1:1039)PCI: Announcing 09:00.0
May 8 10:47:48 XXXX vmkernel: <6>vmnic1: Broadcom NetXtreme II BCM5708 1000Base-T (B2) PCI-X 64-bit 133MHz found at mem d6000000, IRQ 105, node addr 001aa023e81d
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)PCI: driver bnx2 claimed device 09:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)PCI: Registering network device 09:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)Uplink: 2082: Couldn't find vmnic1. Creating a new node
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)Uplink: 3477: Connecting device vmnic1 to pps
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)Uplink: 3622: Device vmnic1 yet to come up
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)LinPCI: 202: Device 9:0 claimed.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)Mod: 2530: called already for this device.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)PCI: Trying 0a:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)PCI: Announcing 0a:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)PCI: Trying 0a:00.1
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)PCI: Announcing 0a:00.1
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)PCI: driver bnx2 claimed 2 devices
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.248 cpu1:1039)IDT: 1337: 0x71 <vmnic0> sharable (entropy source), flags 0x10
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.348 cpu1:1039)Uplink: 2491: Setting capabilities 0x0 for device vmnic0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.348 cpu1:1039)NetNCP: 1818: Opening discovery port
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.348 cpu1:1039)NetDiscover: 946: Using port 0x3's output chain
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.349 cpu1:1039)IDT: 1337: 0x69 <vmnic1> sharable (entropy source), flags 0x10
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.448 cpu1:1039)Uplink: 2491: Setting capabilities 0x0 for device vmnic1
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.448 cpu1:1039)NetNCP: 1818: Opening discovery port
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.448 cpu1:1039)NetDiscover: 946: Using port 0x3's output chain
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.448 cpu1:1039)Mod: 1436: Initialization for bnx2 succeeded with module ID 4.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.448 cpu1:1039)bnx2 loaded successfully.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.527 cpu0:1024)Loading module e1000 ...
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.529 cpu0:1024)Mod: 936: Starting load for module: e1000 R/O length: 0x1c000 R/W length: 0x6000 Md5sum: 5ed42f1fc7747a048cda5914ac0d
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)Mod: 1373: Module e1000: initFunc: 0x8b1624 text: 0x8b1000 data: 0x28d93e0 bss: 0x28d9c60 (writeable align 32)
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)Mod: 1389: modLoaderHeap avail before: 7809064
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)Initial heap size : 102400, max heap size: 4194304
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)<6>Intel(R) PRO/1000 Network Driver - version 7.3.15
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)<6>Copyright (c) 1999-2006 Intel Corporation.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)PCI: driver e1000 is looking for devices
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)PCI: Trying 00:08.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)PCI: Trying 02:0e.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)PCI: Trying 05:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)PCI: Trying 09:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)PCI: Trying 0a:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)PCI: Announcing 0a:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.673 cpu1:1039)<7>PCI: Setting latency timer of device 0a:00.0 to 64
May 8 10:47:48 XXXX vmkernel: <6>e1000: 0a:00.0: e1000_probe: (PCI Express:2.5Gb/s:Width x4) 00:15:17:36:4b:3e
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)<6>e1000: vmnic2: e1000_probe: Intel(R) PRO/1000 Network Connection
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)PCI: driver e1000 claimed device 0a:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)PCI: Registering network device 0a:00.0
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)Uplink: 2082: Couldn't find vmnic2. Creating a new node
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)Uplink: 3477: Connecting device vmnic2 to pps
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)Uplink: 3622: Device vmnic2 yet to come up
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)LinPCI: 202: Device a:0 claimed.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)Mod: 2530: called already for this device.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)PCI: Trying 0a:00.1
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)PCI: Announcing 0a:00.1
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.762 cpu1:1039)<7>PCI: Setting latency timer of device 0a:00.1 to 64
May 8 10:47:48 XXXX vmkernel: <6>e1000: 0a:00.1: e1000_probe: (PCI Express:2.5Gb/s:Width x4) 00:15:17:36:4b:3f
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.851 cpu1:1039)<6>e1000: vmnic3: e1000_probe: Intel(R) PRO/1000 Network Connection
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.851 cpu1:1039)PCI: driver e1000 claimed device 0a:00.1
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.851 cpu1:1039)PCI: Registering network device 0a:00.1
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.851 cpu1:1039)Uplink: 2082: Couldn't find vmnic3. Creating a new node
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.851 cpu1:1039)Uplink: 3477: Connecting device vmnic3 to pps
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.851 cpu1:1039)Uplink: 3622: Device vmnic3 yet to come up
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.851 cpu1:1039)LinPCI: 202: Device a:1 claimed.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.851 cpu1:1039)Mod: 2530: called already for this device.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.851 cpu1:1039)PCI: driver e1000 claimed 2 devices
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.852 cpu1:1039)IDT: 1337: 0x79 <vmnic2> sharable (entropy source), flags 0x10
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.852 cpu1:1039)Uplink: 2491: Setting capabilities 0x0 for device vmnic2
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.852 cpu1:1039)NetNCP: 1818: Opening discovery port
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.852 cpu1:1039)NetDiscover: 946: Using port 0x3's output chain
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.852 cpu1:1039)IDT: 1337: 0x81 <vmnic3> sharable (entropy source), flags 0x10
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.853 cpu1:1039)Uplink: 2491: Setting capabilities 0x0 for device vmnic3
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.853 cpu1:1039)NetNCP: 1818: Opening discovery port
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.853 cpu1:1039)NetDiscover: 946: Using port 0x3's output chain
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.853 cpu1:1039)Mod: 1436: Initialization for e1000 succeeded with module ID 5.
May 8 10:47:48 XXXX vmkernel: 0:00:00:04.853 cpu1:1039)e1000 loaded successfully.
Hello,
Please run the command:
esxcfg-nics -l
This will show the current speed the adapter thinks it is running at. That is the important bit.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
root@XXX log# esxcfg-nics -l
Name PCI Driver Link Speed Duplex MTU Description
vmnic1 09:00.00 bnx2 Up 1000Mbps Full 1500 Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
vmnic2 0a:00.00 e1000 Up 10Mbps Full 1500 Intel Corporation 82571EB Gigabit Ethernet Controller
vmnic3 0a:00.01 e1000 Up 10Mbps Full 1500 Intel Corporation 82571EB Gigabit Ethernet Controller
vmnic0 05:00.00 bnx2 Up 1000Mbps Full 1500 Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
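On hosts with several uplinks, a listing like the one above can be filtered down to just the ports that did not come up at gigabit. A minimal sketch, assuming the column layout of `esxcfg-nics -l` shown above (Speed is the fifth field):

```shell
# Filter an esxcfg-nics -l listing down to uplinks not linked at 1000Mbps.
# NR > 1 skips the header row; field 5 is the Speed column.
slow_nics() {
    awk 'NR > 1 && $5 != "1000Mbps" { print $1, $5 }'
}

# On a live ESX host you would run:
#   esxcfg-nics -l | slow_nics
```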
It reads better, but it's still 10 Mbps... Any hints?
Thanks!
Cheers,
Harry
Your NICs are autonegotiating down to 10 Mb. Have you checked your switch ports to force them up to 1000/Full? You can also turn autoneg off on the ESX host side and force them to 1000/Full.
-KjB
The IEEE standard is that for gig links, autonegotiation is enabled and the speed is not to be forced.
Clause 40 (1000BASE-T), subclause 40.5.1 of 802.3 states:
All 1000BASE-T PHYs shall provide support for Auto-Negotiation
(Clause 28) and shall be capable of operating as MASTER or SLAVE.
Auto-Negotiation is performed as part of the initial set-up of the link, and
allows the PHYs at each end to advertise their capabilities (speed, PHY
type, half or full duplex) and to automatically select the operating mode
for communication on the link. Auto-negotiation signaling is used for the
following two primary purposes for 1000BASE-T:
a) To negotiate that the PHY is capable of supporting 1000BASE-T half
duplex or full duplex transmission.
b) To determine the MASTER-SLAVE relationship between the PHYs at
each end of the link. The 1000BASE-T MASTER PHY is clocked from a local source.
The SLAVE PHY uses loop timing where the clock is recovered from the
received data stream.
What this means is that although autonegotiation (Clauses 22 and 28) is optional for
most variants of Ethernet and manual configuration (forced mode) is allowed, this is
not the case for Gigabit copper (1000BASE-T). Per the IEEE 802.3u specification, it is
not possible to manually configure one link partner for 100 Mbps full duplex and
still autonegotiate to full duplex with the other link partner. In all cases, both ends of the link must be set to the same value, or the link may not connect or may end up in a duplex mismatch, as shown in the following tables.
The tables below this paragraph (page 7/18 in Adobe, or page 5 of the document) clearly show the effects of not using autonegotiation.
It's actually amazing how many Cisco CCIEs also believe you should be hard-coding the speed of copper gig links, even when shown the documents from the IEEE and Cisco that dispute that.
Old habits die hard I guess.
Hi,
I have used that card and I know it normally works without issues. So you need to look at the normal possibilities.
1) Switch ports set to auto? (Auto is the correct setting on the switch)
2) Are the cables cat5e?
3) Are the cables too close to an RF noise source?
4) Disconnect and reconnect the physical cables to trigger a re-negotiation.
If they all check out then try these commands to set the host NIC at 1000Full
esxcfg-nics -s 1000 -d full vmnic2
esxcfg-nics -s 1000 -d full vmnic3
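If forcing the speed doesn't help (or drops the link entirely), the ports can be put back to auto-negotiation and re-checked. A sketch using esxcfg-nics (the -a flag sets a NIC back to auto-negotiate):

```shell
# Return the forced ports to auto-negotiation
esxcfg-nics -a vmnic2
esxcfg-nics -a vmnic3

# Verify what was actually negotiated afterwards
esxcfg-nics -l
```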
Interesting note Rumple, I did not know that tidbit. Thanks!
Unfortunately, the info still doesn't get this one to work.
ps. What model of switch are you using, some low end ones do not negotiate very well.
Oh! One more thing, this one sometimes happens (And yes I have done this)
5) Are the cables connected where you think they should go, or where they actually go?
Standards are good on paper, but in practice you have to use what works. While autoneg works more often than not, different vendors have different algorithms for autoneg. You'll find that some cards will trade autoneg packets at different rates, and can negotiate at different speeds and duplexes. While this should not make a difference, it does, and the point was to force the switch port and the NIC to see if the link will connect at that setting.
We've run into issues many different times where autoneg has negotiated down to 1000/Half or 100/Full. Why? Well, because autoneg thought that was the highest state that could be negotiated. Again, you force the switch and force the port, and everything works just fine. So, in a network where you want to be sure you are running at a certain speed and duplex, set it that way; or don't, and most of the time you will autoneg to 1000/Full on a gig network, and sometimes you will not.
There's a reason the option to force a speed exists. It is an exception; most of the time autoneg will work, but an exception wouldn't be an exception if the same thing worked in every scenario.
Just my $0.02.
-KjB
Message was edited by: kjb007: changed cars to cards : )
The advantage with gig is that it either will negotiate at 1000 or it will not. There is not a half-duplex setting for a gig link...
I'm not sure what you mean about the first part. Are you saying that a gig link will either negotiate to 1000, or it will not negotiate to anything? That's certainly incorrect. Maybe it's just me here, but I've seen more than one occasion where a gig link has been at 1000/Half without being forced, and it can certainly be forced to half duplex as well.
In this specific case, however, if the link was not able to negotiate higher, then it has negotiated down to 10 Mbps. Force the link on the NIC and the switch to be 1000/Full, and see if it's simply a problem with autoneg here.
-KjB
Actually, I think the specification does support full/half duplex mode, but I know on the Cisco gear, if you specify a duplex setting within the config, it apparently ignores it, since you'd only want to run gig in full mode anyhow. In reality, I am pretty amazed they even put a duplex mode into the standard. A couple of docs I was reading a while ago stated that none of the large vendors the article writers talked to were even putting half-duplex mode into their gig chipsets...
Hello all,
Thanks for your input; so far I have not gotten it to work.
The switch is an HP ProCurve 2900, 50 ports. I replaced the cable with a clear run from NIC to switch, and still no improvement.
I used the switch interface to put the ports into auto-1000 mode. This resulted in a lost link on the ESX server.
Forcing the NICs to 1000 full duplex with the ESX tool did not work.
Any suggestions what to try next?
Cheers,
Harry
root@XXXX log# ethtool vmnic2
Settings for vmnic2:
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 1000baseT/Full
Advertised auto-negotiation: Yes
Speed: Unknown! (65535)
Duplex: Unknown! (255)
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000007 (7)
Link detected: no
Currently the speed is forced on both the switch and the ESX host; note the strange values for speed and duplex.
root@XXXX log# ethtool vmnic2
Settings for vmnic2:
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: Unknown! (65535)
Duplex: Unknown! (255)
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: umbg
Wake-on: g
Current message level: 0x00000007 (7)
Link detected: no
With only the switch forced:
root@XXXX log# ethtool -i vmnic2
driver: e1000
version: 7.3.15
firmware-version: 5.6-2
bus-info: 0a:00.0
driver: e1000
version: 7.3.15
firmware-version: 5.6-2
bus-info: 0a:00.1
Set your switch to 1000-Full; auto-1000 will still negotiate, and negotiation may not be working correctly for some reason.
Then use esxcfg-nics to set your NICs to 1000/Full as Mike showed earlier. That way, no negotiation is taking place.
-KjB
I have the e1000 Intel card working on the 2900 series at the office with auto-auto. There are some strange driver behaviors with the e1000 series.
I recall having an issue like this one, and after I set the switch to auto, set the card to defaults, and then restarted the ESX server, it would negotiate correctly.
It seems to only negotiate correctly on the initial modprobe, and after that it will stick to what it detected on that initial probe.
So having the switch not set to auto during the initial modprobe event may be what creates this state.
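Based on that observation, one thing worth trying is to set the switch port to auto first and then reload the driver, so the initial probe happens against the correct switch setting. A sketch; note that unloading the driver drops the link on all e1000 uplinks, so do this from the console, not over one of those NICs:

```shell
# Reload the e1000 driver so the initial probe runs against the
# switch's current (auto) setting. vmkload_mod is the ESX module loader.
vmkload_mod -u e1000
vmkload_mod e1000

# Re-check the negotiated speed afterwards
esxcfg-nics -l
```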
Hi all,
The switch only allows me to set the port to auto-1000, or to disable auto with 100FDX, etc.
Forcing both the switch and the network card doesn't work. Forcing only one of the two also doesn't work.
I'm running out of options, it seems; anybody have any tips? Should I look for the problem in the switch or in the network card? It's still strange that the other servers work fine with the same kind of cards on the same switch.
Cheers,
Harry
I would also try swapping the cables between a known-good NIC and a known-bad NIC, and see if the problem shifts along with the cables. Then you can tell whether your NICs are the issue.
-KjB
Hello,
One thing to consider on top of all this is the firmware levels in the pNICs. I would verify that you have the proper firmware levels; compare against the working machine. The other item I would check is for IRQ overlaps. Once more, compare against the working machine. Also, some systems share IRQs between remote-management hardware and pNICs; be aware of that as well.
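A sketch of that comparison, run on both the working and the failing host (ethtool -i reports driver and firmware versions per NIC; /proc/interrupts in the service console shows which IRQ lines are shared):

```shell
# Record driver/firmware versions for each pNIC on this host
for n in vmnic0 vmnic1 vmnic2 vmnic3; do
    echo "== $n =="
    ethtool -i "$n"
done

# Look for IRQ lines shared with other devices (service console view)
cat /proc/interrupts
```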
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354, As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
I think it would be good to isolate the possibilities.
1) Replace the card.
2) Try connecting to a different switch.
Hello all,
Many thanks for your support. Last week I tried a lot of solutions.
1. Tried another switch (still an HP one).
2. Tried a crossover cable to one machine.
3. Installed Win2k3 with the Intel network tools and did a full diagnostic run. Everything OK, it said.
4. I double-checked, but the HP switch only supports auto-1000FDX and not a forced 1000FDX mode.
Still everything is 10 Mbit...
It's a dual-port NIC, so I connected the cable from port 1 to port 2. I got 1 Gb! Funny, but useless.
After all this failed, we put in a different NIC, and that worked straight away.
I'm sending the NIC back to the supplier for RMA and will stop wasting my time.
Thanks for your help.
Kind Regards,
Harry
You're welcome.
Awarding points is a great way to say thanks.
I know this subject is a bit old, but the conclusion that I found might help others.
I had the same problem where Windows was showing 10 Mbps within the ESXi environment. The problem, in my case, was that the VMware Tools installation had been started but not completed. The status on the summary page was: VMware Tools "Not installed". After completing the installation, the problem fixed itself, and now Windows XP shows 1 Gbps.