VMware Cloud Community
pirx666
Contributor

Which HPE 10/25Gb Adapter, Broadcom or Mellanox?

Hi,

we have to switch from 10GbE to 25GbE soon. For 10GbE I used NICs with Intel chipsets before; for 25GbE we can only choose between Broadcom and Mellanox.

- 631SFP28 (Broadcom BCM57414)

- 640FLR (Mellanox ConnectX-4 Lx)

HPE Ethernet 10/25Gb Adapters (QuickSpecs/a00047733enw.pdf)

Any recommendations, or any known nasty issues with one or the other? Both are on the compatibility matrix and supported for vSAN.
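
For reference, this is roughly how I plan to cross-check whatever we buy against the VMware Compatibility Guide once it is in a host (a rough sketch; vmnic4 is just an example name, and the drivers in the comments are what I'd expect for these two chips):

esxcli network nic list                  # driver actually loaded per vmnic (bnxtnet for the BCM57414, nmlx5_core for the ConnectX-4 Lx)
esxcli network nic get -n vmnic4         # driver and firmware version under "Driver Info"
vmkchdev -l | grep vmnic4                # PCI VID/DID/SVID/SSID to look up the exact entry on the VCG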

6 Replies
dbalcaraz
Expert

Pick whatever you feel is more "reliable".

In my case I have a good feeling about Broadcom (10GbE), but this is up to you.

pirx666
Contributor

I had some real nightmares with network cards / chipsets in the past. A feeling is nice but doesn't help much, which is why I wanted to learn from the experience others have already made. Not much of that around, it seems. I'll go with the 640SFP28 with the Mellanox chip; this is what HPE uses in their vSAN Ready Nodes, and I hope there is a reason they chose this card.

a_p_
Leadership

I don't have real-world experience (yet), but I think one reason for the Mellanox adapter is its RoCE/RDMA capability, which, however, also requires specific physical switch settings (e.g. DCB/PFC for lossless Ethernet).

see e.g. http://www.mellanox.com/related-docs/solutions/SB_vSAN%20VMware.pdf
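
If you want to check whether ESXi actually exposes the card's RDMA function, something like this should do (just a sketch; the device names are examples, and RoCE still needs the matching switch-side configuration):

esxcli rdma device list          # RDMA-capable uplinks show up as vmrdmaX, paired with their vmnic
esxcli rdma device vmknic list   # vmkernel interfaces currently bound to an RDMA device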

André

IRIX201110141
Champion

Can't speak about HPE, but we have been using DellEMC BCM57414 for more than a year now. They are OK, and from the ESXi perspective we haven't seen any issues.

But...

- 25G networking is different compared to 10G

- Various auto-negotiation standards exist, with a dozen settings in the NIC firmware (see the commands sketched below)

- For the onboard (rNDC) BCM57414 we need different optics than for the PCIe BCM57414

- The BCM57414 supports RDMA as well, and NPAR (haven't used it yet)

- DAC is possible.... no DAC support for some of the other 25G NIC options
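
A minimal sketch of the link checks we do from the ESXi side (vmnic4 is just an example name; FEC has to match the switch configuration as well, so check both ends):

esxcli network nic get -n vmnic4                     # Auto Negotiation, Link Status, Speed, Duplex of the uplink
esxcli network nic set -n vmnic4 -S 25000 -D full    # pin 25G/full if auto-neg between NIC and switch misbehaves
esxcli network nic set -n vmnic4 -a                  # back to auto-negotiation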

Regards

Joerg

pirx666
Contributor

In the meantime I had to find out the hard way that I have to do more research and testing before changing a network adapter. In our new DL380 Gen10 we chose 562SFP+ adapters (Intel X710) for 10Gb, not 25Gb, because the 560SFP+ we used before was reliable. I was wrong. We had to replace all adapters with a different NIC (Mellanox). A 5-minute search with Google would have saved me 2 weeks of escalation.

Intel X710 nic woes
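
What I would do now before any rollout is a short burn-in check on a pilot host, roughly like this (the vmnic name and grep patterns are just examples):

esxcli network nic get -n vmnic0           # confirm driver and firmware version under "Driver Info"
esxcli software vib list | grep -i i40en   # installed driver VIB version (adjust the pattern to your driver)
grep -i vmnic0 /var/log/vmkernel.log       # watch for link flaps or resets during the test period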

adgate
Enthusiast

A vote for Mellanox, taking into account this article:
"HPE and Mellanox recently published a Solution Brief highlighting their cloud-ready OpenNFV (Network Functions Virtualization) solution which demonstrates record DPDK performance and OVS acceleration using Mellanox ASAP2 (Accelerated Switching and Packet Processing). The results were based on HPE ProLiant Gen10 Servers and Mellanox ConnectX-5 Adapters and Spectrum Switches. "
http://www.mellanox.com/blog/2019/02/hpe-and-mellanox-offer-cloud-ready-opennfv-solutions-with-recor... 
