VMware Cloud Community
philled99
Contributor

NICs running at 100Mbps not Gigabit

I recently upgraded my 100Mbps switch to a Gigabit switch. However, I noticed that the ESXi NICs are still running at 100Mbps, not 1000Mbps. So using vSphere Client I changed the speed to 1000Mbps, and oh boy what a disaster that was. It trashed the network connection, which meant everything stopped working and I could no longer connect to the ESXi host through vSphere Client. And because I couldn't connect with vSphere Client, I couldn't change the speed back to 100Mbps. So I ended up going into the ESXi console and choosing Reset from the menu, which essentially means starting again with a blank ESXi config. This was of course very frustrating.

So my questions are:

1) Why might the NICs be running at only 100Mbps even though they're now connected to a Gigabit switch?

2) How do I get them running at Gigabit without losing network connectivity?

3) If I do run into the same problem again, how can I reset the NIC without having to do a complete ESXi reset?

5 Replies
Phokay
Enthusiast

In most cases, you should set the ESXi NIC speed/duplex to "Auto" and set the switch ports to "Auto" as well. With both sides auto-negotiating, the link will usually come up at 1Gbps. Sometimes you might have to unplug the cable and plug it back in (at either the switch port or the NIC side), and in rare cases you may have to reboot the ESXi host for the change to stick.
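
To confirm what each uplink actually negotiated, you can check from the ESXi shell. A minimal example (the vmnic names will vary per host):

# List all physical NICs with their driver, link state, speed, and duplex
esxcfg-nics -l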

Gr33nEye
Enthusiast

You can change the speed and duplex of the NICs via the console:

esxcfg-nics -s <speed> [-d <duplex>] vmnicX
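
For example, to force gigabit on an uplink and to undo it again (vmnic0 here is just a placeholder for whichever NIC you're changing):

# Force 1000Mbps full duplex on vmnic0
esxcfg-nics -s 1000 -d full vmnic0

# Put vmnic0 back to auto-negotiation, e.g. after a forced setting breaks the link
esxcfg-nics -a vmnic0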

brandonclone1
Contributor

I experienced this issue last night in my homelab on an R710 with a NetXtreme II gigabit NIC. I can change the vNIC speed to anything except 1000Mbps and it will work, but as soon as I choose 1000Mbps it appears to crash the Broadcom driver (in ESXi 6.0 this is bnx2). ESXi is still running locally after the crash, so when I look at the networking settings the interface shows the vNIC as disconnected. I have to manually reboot the server for the vNIC to change back to Auto Negotiate. Restarting the management network or vSwitch does not put it back to Auto Negotiate.

I tried swapping Cat5e cables, tried all 4 ports on my gigabit NIC (same issue), and tested the gigabit switch with a laptop (works fine). I need to do some troubleshooting to make sure ESXi 6.0 is using recent bnx2 drivers and that my NetXtreme firmware is up to date. Other than that I'm at a loss.
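
In case it helps anyone checking the same things, the driver and firmware versions can be read from the ESXi shell; something like this should do it (vmnic0 is a placeholder for the uplink in question):

# Show driver name/version and firmware version for one uplink
esxcli network nic get -n vmnic0

# List all uplinks with their current link status and speed
esxcli network nic list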

Dave_the_Wave
Hot Shot

It would be better if you mentioned your exact hardware: the NICs and the switch.

If a physical adapter isn't detected at 1000 with the default auto-negotiation, something is usually forcing it to stay down at 100.

In order of most likely cause to least likely:

- A cold reboot of all hardware hasn't been done yet

- Out-of-date drivers

- Desktop-grade physical adapters

- Consumer-grade Ethernet hubs

- Old patch cables and/or old wall ports

For example, I've always used HP ProLiant servers on HP ProCurve switches, and I never had to do anything to get them to 1000; they simply run at that speed by default.

brandonclone1
Contributor

Turns out, my server was on a 10/100 switch. D'oh! I moved it over to an HP 24G switch and the ESXi vNICs immediately updated to 1000Mbps.

Lesson learned - always start with the most basic troubleshooting. When the first issue occurred (the vNIC crashing after I attempted to force 1000Mbps) I went down the rabbit hole of drivers, firmware updates, etc., when I should have looked at the physical topology first.
