VMware Cloud Community
alize77
Contributor

Incorrect NIC speed!

In the VI Client, one of my NICs is showing as 100Mbps. However, on the guest VM, it shows as 1000Mbps. I have checked the speed using esxcfg-nics -l and indeed, the speed is running at 100Mbps. I'm totally clueless, as I have checked the switch port settings and it's set to auto, and on the VMware side it is also running at auto. If I force the vmnic to run at 1000Mbps, the status shows as "Down".

I have changed the network cable and switched to another port, but the speed still runs at 100Mbps. Where else could it be going wrong? Has anyone experienced such an issue before? Please advise, and thanks.
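For reference, these are roughly the commands I used from the Service Console (vmnicN standing in for the affected adapter):

  # list physical NICs with link state, speed and duplex
  esxcfg-nics -l

  # force the NIC to gigabit full duplex (this is what takes the link down for me)
  esxcfg-nics -s 1000 -d full vmnicN

  # put the NIC back to autonegotiation
  esxcfg-nics -a vmnicN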

17 Replies
alanrenouf
VMware Employee

So is the physical switch that your ESX server is plugged into a 1000Mbps switch? Have you checked the port on the switch end and tried forcing it to 1000Mbps?

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Blog: http://virtu-al.net Twitter: http://twitter.com/alanrenouf Co-author of the PowerCLI Book: http://powerclibook.com
Yattong
Expert

Hey

The guest VM will show 1000Mbps because it is attached to the 'virtual switch', which will be set to the maximum possible speed.

The physical NIC, which you see through the VI Client, is the uplink from the virtual switch to the physical switch. These are set to auto/auto, which is fine.

It doesn't seem like a problem to me.

Are you experiencing network problems?

Read up on the networking concepts and you will understand this better.

If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points ~y
alize77
Contributor

I have already checked the switch configuration; it's set to auto and supports 1000Mbps. If I force 1000Mbps on either the physical switch or the vmnic, the network adapter status becomes "Down".

In the VI Client or on the ESX console the speed shows as 100Mbps, whereas the guest VM shows 1000Mbps. I have 8 NIC ports on the server and only 1 of them is reflected as 100Mbps. I even tried changing the cable and switching to another physical port on the switch, and it still comes up as 100Mbps, which is really strange to me!

weinstein5
Immortal

Interesting. It does sound like your troubleshooting has eliminated the physical switch as the problem. Are the physical NICs identical on the ESX host? In terms of the VM, as someone else mentioned, it will always show 1000Mbps, because the VMkernel virtualizes the network and creates a virtual network card that is recognized as a GigE network card.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
alize77
Contributor

Hi Weinstein, I don't quite understand the question you raised, "are the physical NICs identical on the ESX host?" Can you kindly elaborate? Even though the VI Client shows the speed as 100Mbps, the VM will detect the NIC as 1000Mbps. I'm on the verge of giving up, as this is the first time I have encountered such a strange issue without any clue where the fault lies!

jhanekom
Virtuoso

Hi

To rephrase what previous posters have said: The network link speed inside a virtual machine is completely unrelated to what is available at a physical level. You could have a 10Mbps link on the physical side, and your VM would still show 1Gbps. The guest has a virtual network card with no real physical characteristics - the speed settings are just to fool the operating system into thinking it's talking to a real adapter.

Is the adapter you're seeing in the VI Client that is running at 100Mbps set to autonegotiate? What happens if you set this adapter to 1000Mbps? (For that matter, is it possible to set this adapter to 1000Mbps at all?)

Also, if you can run the following command from your Service Console and post the output here, it would be most useful: esxcfg-nics -l

alize77
Contributor

Yes, I have set it to autonegotiate. When I force it to 1000Mbps using esxcfg-nics -s 1000, the NIC goes down.

Name    PCI       Driver  Link  Speed     Duplex  MTU   Description
vmnic0  03:00.00  bnx2    Up    1000Mbps  Full    1500  Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
vmnic2  08:00.00  e1000   Up    1000Mbps  Full    1500  Intel Corporation 82571EB Gigabit Ethernet Controller
vmnic6  0c:00.00  e1000   Up    1000Mbps  Full    1500  Intel Corporation 82571EB Gigabit Ethernet Controller
vmnic5  0a:00.01  e1000   Up    1000Mbps  Full    9000  Intel Corporation 82571EB Gigabit Ethernet Controller
vmnic7  0c:00.01  e1000   Up    1000Mbps  Full    9000  Intel Corporation 82571EB Gigabit Ethernet Controller
vmnic3  08:00.01  e1000   Up    1000Mbps  Full    1500  Intel Corporation 82571EB Gigabit Ethernet Controller
vmnic1  07:00.00  bnx2    Up    1000Mbps  Full    1500  Broadcom Corporation Broadcom NetXtreme II BCM5708 1000Base-T
vmnic4  0a:00.00  e1000   Up    100Mbps   Full    1500  Intel Corporation 82571EB Gigabit Ethernet Controller

weinstein5
Immortal

Have you tried forcing the port to 1000 on the physical switch end while leaving the ESX NIC at auto?

In terms of the VM, remember that the guest OS does not see the physical NIC; all it sees is the virtual NIC, which is identified as a gigabit NIC by the guest OS. So theoretically, even if you had a 10Mbit physical NIC, the guest OS would still say the virtual NIC is gigabit.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
Brock_B
Contributor

I have this same issue going on. Has anyone else run into this? This is on an Intel quad port NIC. One NIC will only connect at 100/Full while the rest connect at 1000/Full. I had to remove this NIC from my port group and virtual switch due to this error, so basically it's useless to me. It would be nice to have the issue resolved.

johu
Contributor

This seems to be a fairly common problem with Intel Pro/1000 copper adapters and ESX, regardless of cable or switch manufacturer. I've seen it with the Intel 82571EB dual-port on Fujitsu-Siemens RX300S3 and BX620S3 servers connected to Nortel BS5510 and BS5520 switches. Replacing switches, network cards or cabling doesn't help.

The problem can be solved by configuring the switch to advertise only 1000 Full as a valid speed while still leaving autonegotiation enabled. The default advertised speeds are usually 10 Half, 10 Full, 100 Half, 100 Full and 1000 Full, which causes servers with patch cables shorter than 2 meters to negotiate 10 Full and those with longer cabling to negotiate 100 Full. Attempts to solve this by forcing 1000 Full simply cause a link-down situation, since 1000Base-T relies on autonegotiation to establish a link at all.

The same hardware with the integrated Broadcom BNX2 works without problems. Running Windows on the same hardware also fixes the problem, so it appears to be something in the Linux/ESX e1000 driver itself, present at least in ESX 3.5.0 build 110268 and older.
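The exact commands are vendor-specific; as a rough illustration only (the switches here are Nortel, whose syntax differs), restricting the advertised speeds on a Cisco IOS switch port while leaving autonegotiation enabled would look something like this:

  interface GigabitEthernet0/4
   ! keep autonegotiation enabled, but advertise only 1000 Full
   speed auto 1000
   duplex auto

Check your own platform's documentation for the equivalent autonegotiation-advertisement settings.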

RParker
Immortal

> Yes, I have set it to autonegotiate. When I force it to 1000Mbps using esxcfg-nics -s 1000, the NIC goes down.

The question is: if you take the LAN cable plugged into that port on ESX and follow it ALL the way back to the physical port on the physical switch, are you SURE it supports 1000Mb speed? I don't think it does, which is why autonegotiation only gives you 100Mbps, and when you try to force it the NIC drops. You don't have a 1000Mb (Gig) link! That's the problem. Your switch, or that port in the switch, does NOT support Gig speeds.

plarsen
Enthusiast

We had a similar problem here. We tried several things, including ordering new NICs, but that didn't help. In our case it was dual-port NICs (more than one) where one port negotiated at 1000 and the other at 100, going to the same switch, with the same settings, etc.

The solution was replacing the cables. Get Cat5e or Cat6 cables. Make sure they're properly insulated and that there are no loose connections. (Gigabit uses all four wire pairs, while 100Mbps only needs two, so a cable with one marginal pair will still link at 100 but not at 1000.) It solved our issue, after a long period of frustration.

macneej
Enthusiast

I have the same error on a Broadcom BCM5708 card. It is plugged into a gigabit switch, and when either side is hard-set to 1Gb the port goes down. I will try changing the cables and see if it comes back up. However, I am surprised that both ports on the card are down.

raytracy
Contributor

I have a similar problem with ESX 4.0:

An Intel Pro/1000 CT (82574L) is detected as 100M on vmnic0; forcing it to 1000M blocks all ESX network traffic. But after rebooting ESX with this setting, the VI Client configuration panel shows (Speed 100 Full) and (Configured 1000 Full) for vmnic0, and all VM traffic runs.

By the way, this setting also improved iSCSI performance to at least two or three times the previous disk access speed, although esxcfg-nics -l still shows a 100M Full setting.

I would appreciate it if someone could point me to a useful link to solve this problem.

kcampbell
Contributor

I've got the same issue with 3.5u4. Everything is verified at 1000Mb, confirmed, trust me. The least-downtime option would be swapping out cables; forcing 1000Mb takes the NIC down. 10 other ESX hosts with the same specs, consistent with this one in switch and NIC layout and model, do not have this problem, all along the same redundancy routes and on nothing but 1000Mb switches. Just one ESX host has two NICs reporting at 100Mb. Unfortunately for me, within our cluster both our DMS and Exchange server sit on this host. I can't even migrate, since the speed isn't fully supported with VMotion, and we're experiencing slight performance degradation... slight to me, major to others: a few minutes as opposed to a few seconds, etc.

I do not think the card(s) are the problem. I'm trying cables and then placing a support call to POSSIBLY get some further insight. The only other thing I can think of would be a firmware update on the controller for this host. The controller types are the same across the scope, but that doesn't answer the question of why this one, or where this derives from, because 4 others are without updated firmware and they're just fine. I'm not replacing cards........

yet

kcampbell
Contributor

As a follow-up: I swapped out the 2 in-use Cat6 LAN cables with two others. One of the connections I ran to another port on the patch panel and then back to the original port; THAT NIC came up at 1000Mb. The other runs in the wall, and I changed out the cable; it still comes up at 100Mb. The internal wall cabling is Cat6 and its speed is confirmed at 1000Mb, so I'm thinking this is more of a firmware issue.

Now that I've got one channel at 1000, I can migrate both essential servers from that host onto another, then take the host into maintenance mode and update the firmware. I'm almost positive all the NICs will then come up at the correct speed, because it's not a card issue, given the circumstances and other people experiencing the same thing with varying cards (at least I feel confident it isn't a card issue in my situation). I'm just happy for the small victory of getting one of the 100Mb links to 1000Mb with a cable swap so I can evacuate the servers to another host. Small victories.

egunson
Contributor

This answer helped solve my issue today!

The onboard Broadcom NetXtreme II BCM5709 1000Base-T NICs get 1000Mbps FD right across all 4 ports.

However, the Intel PRO/1000 PT Low Profile Quad Port PCIe Gigabit Ethernet Controller gets 2 ports at 10Mbps and 2 at 100Mbps, all FD.

Followed the fix, and now the Intel ports get 1000Mbps FD.

Thanks.
