VMware Cloud Community
Alexey_78
Enthusiast

Broadcom BCM57412 quad port: can't get all ports working

Hello,

I have two Dell servers with a fresh vSphere 6.7 install.

Hypervisor:  VMware ESXi, 6.7.0, 11675023   

Model:  PowerEdge R640

All NICs are connected to the same switch. No port-specific configuration on the switch.
NIC:

vmnic0  0000:19:00.0 bnxtnet Up   10000Mbps  Full   b0:26:28:5b:78:a4 1500   Broadcom Limited BCM57412 NetXtreme-E 10Gb RDMA Ethernet Controller
vmnic1  0000:19:00.1 bnxtnet Up   10000Mbps  Full   b0:26:28:5b:78:a5 1500   Broadcom Limited BCM57412 NetXtreme-E 10Gb RDMA Ethernet Controller
vmnic2  0000:01:00.0 ntg3    Up   1000Mbps   Full   b0:26:28:5b:78:a2 1500   Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet
vmnic3  0000:01:00.1 ntg3    Down 0Mbps  Half   b0:26:28:5b:78:a3 1500   Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet
vmnic4  0000:3b:00.0 bnxtnet Up   10000Mbps  Full   b0:26:28:5f:86:10 1500   Broadcom Limited BCM57412 NetXtreme-E 10Gb RDMA Ethernet Controller
vmnic5  0000:3b:00.1 bnxtnet Up   10000Mbps  Full   b0:26:28:5f:86:11 1500   Broadcom Limited BCM57412 NetXtreme-E 10Gb RDMA Ethernet Controller

On both servers I cannot get vmnic0 and vmnic1 working. On the same quad-port card, vmnic4/5 work fine.

For my tests I created a vSwitch and a VM on each server and attached physical interfaces from vmnic0-5 to it. The VM network is attached to the same vSwitch.

When vmnic4/5 are attached I can ping between the VMs without issue. When vmnic0 or vmnic1 is used, there is no ping or ARP traffic on the VMs. Just silence.
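For reference, a sketch of the checks that can be run from the ESXi shell to compare the dead ports against the working ones (vmnic names taken from the list above; assumes shell/SSH access to the host):

```shell
# List physical NICs with driver, link state and speed
esxcli network nic list

# Per-NIC details (driver and firmware version) -- compare a dead
# port (vmnic0) against a working port on the same model (vmnic4)
esxcli network nic get -n vmnic0
esxcli network nic get -n vmnic4

# Show which uplinks are attached to the test vSwitch
esxcli network vswitch standard list
```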

Any help will be appreciated.

9 Replies
daphnissov
Immortal

Did you use the DellEMC-customized ISO of ESXi? If not, that may be your mistake and you should re-install using that image.

Alexey_78
Enthusiast

I think the standard ISO was used.

I've got an update from my hardware guy.

It turns out there is no single quad-port card, but two separate cards.

Built-in network hardware:

Article number 540-BBUL

Broadcom 57412 two-port 10 Gbit/s SFP+ + 5720 two-port 1 Gbit/s Base-T, rNDC

+

Manually inserted card:

Broadcom 57412 two-port 10 Gbit/s, SFP+, PCIe adapter, low profile

In the ESXi UI I saw four 57412 ports, which is why I thought it was a 4-port card.

The manually inserted card works fine, but the built-in one has issues.

daphnissov
Immortal

So which adapter is not showing vmnics within ESXi? Also, you need to check and be positive you used the correct image. Don't guess here.

Alexey_78
Enthusiast

All NICs are shown, but nothing works when vmnic0/1 are attached to the vSwitch.
All links are physically connected to the same switch. No port restrictions on the switch.

Same issue on two identical Dell servers equipped the same way.

I tried installing ESXi 6.7 from the Dell-customized ISO.

I also tried the official 6.5 image.

Same issue.

No traffic through vmnic0/1.

Any ideas?

Don't want to return servers because of that crap.

daphnissov
Immortal

If I were you, I'd open an SR with VMware and have them look into it before you do anything rash like return a whole server.

iPupp
Contributor

Hi,
Maybe I had the same issue: same network card, same OS (VMware). The problem was that traffic did not pass, but the link showed as connected.

I updated the network card firmware via iDRAC, and the problem was solved.

Give it a try.

[screenshot attachment: 1563776976624.jpg]

lizhiwei066sz
Contributor

Recently, I faced the same problem with a Broadcom BCM57412 dual-port 10Gb SFP+ card after a hardware break-fix replacement.

Symptom: due to a hardware issue with the Broadcom BCM57412 dual-port 10Gb SFP+ PCIe card, Dell replaced it with a new card. After the replacement, we faced a connectivity issue. Even though the iDRAC console and vCenter showed the card as connected, with both ports linked to the network switch, no VMs could be reached via ping and there was no outbound traffic from the VMs. We verified the cabling and switch configuration; both were correct.

vSphere ESXi version: 6.5 (we did not patch it regularly)

Solution: we checked another R740 server with the same PCIe card installed; its firmware version was newer, starting with 21.60.xx.xx. The BCM57412 firmware version on the problem server was 21.40.xx.xx. I searched the Dell support website and downloaded a newer firmware package, Network_Firmware_YK81Y_WN64_21.60.22.11_03.EXE, then used the iDRAC portal to perform the firmware upgrade. After a reboot, I tested again with ping and all VMs were reachable.
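Since the fix came down to firmware, a quick way to read the firmware version that the bnxtnet driver reports from the ESXi shell (a sketch; vmnic0 is assumed here to be one of the BCM57412 ports):

```shell
# Firmware version as reported by the driver -- compare this
# value against the same command on a known-good host
esxcli network nic get -n vmnic0 | grep -i 'firmware'
```

Per the post above, the affected host was on a 21.40.xx.xx release versus 21.60.xx.xx on the working R740.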

So I guess this problem was due to a legacy firmware issue; we also need to plan regular firmware/patch updates for our hosts.

Also, thanks to the experts here for their contributions of useful information.

long940216
Contributor

Which page did you take the screenshot from? Can you paste the URL?

bluefirestorm
Champion

long940216,

The screenshot quite obviously has the styling of the Dell website, and the mention of iDRAC all but confirms it.

https://www.dell.com/support/home/en-us/drivers/driversdetails?driverid=6dr4k

In the future, it is better to start your own thread rather than posting in a thread that has been inactive for months, in this case years.
