VMware Cloud Community
Ivan_Drago
Contributor

Which 10Gb card to use with ESXi

Guys,

I am testing out ESXi 4.1 with 10Gb NICs.  A whole bunch of them are supported by VMware, but which one would be better or more reliable?  I am using HP DL380 G7 servers.

Any input would be appreciated.

Thanks.

21 Replies
chriswahl
Virtuoso

Not sure there is a right answer here, but I use QLogic QLE3242 cards and have not had any issues with them.

They are certified for 4.1u1.

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
JohnADCO
Expert

My question is of course...

What is the cheapest card on the HCL that is known to work well, doesn't have issues with the majority of 10GbE switches out there, and would be supported in Dell 2950 and 2970 hosts?

Maybe even a dual-port card?   Ideally we need two ports for storage and one port for LAN.

Ollfried
Enthusiast

I installed two DL385 G7s with the NC523SFP. Dual port, no problems. I also installed NC522s in G6 servers, likewise with no problems.

idle-jam
Immortal

Just remember you need 10Gb end to end (your switches as well as your other servers), otherwise it's better to save the money and stay on 1Gb or Virtual Connect.

markzz
Enthusiast

Although this may be something of a late response, I made the mistake of purchasing the NC523SFP 10Gb NICs.

These have been nothing but trouble.

Although they have been in production use for over a year, there have been many link-loss issues with these cards.

We dual-path all networks by installing two dual-port cards; each card supplies one port to each of the two segments required, so until recently we had not experienced any issues which impacted production.  (Both cards had not gone down at the same time, "until recently".)

As mentioned, we did recently see an issue where both cards lost link for a few seconds, unfortunately at the same time.

After reading a considerable quantity of online guff, I came to the conclusion it was time to investigate ESXi 5 with the new firmware and drivers.

I would be lying if I said this had resolved anything. The issues are considerably worse. We don't see the link loss we once did, but from time to time the cards simply refuse to transmit packets, and we can then no longer see our NFS stores or guest servers.

Here's an HP article describing the issue.

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&taskId=110&prodSeriesId=...

This article also mentions the NC522SFP, which is another QLogic card rebranded by HP. That card has in fact been very stable.

If you have not purchased 10Gb cards at this point, I would suggest the NC552SFP or the NC522SFP.

The NC552SFP is an Emulex card, and although I'm only testing one currently, it appears to be stable.

The NC522SFPs were a bit flaky in the early days, but they have been fine for well over 3 years and appear stable with ESXi 5.

markzz
Enthusiast

Just a quick update.

The short answer is: avoid anything which uses a QLogic chipset.

This has been the answer to our NIC issues (both 10Gb and 1Gb).

mghdesign
Contributor

Has anyone gained any experience with the Emulex OCe11102-NT?

http://www.emulex.com/products/10gbe-network-adapters-nic/emulex-branded/oce11102-nt/overview.html

I am not sure of the difference between the OCe11102-NT and the OCe11102-IT.

Would appreciate hearing from anyone who has tried it (especially if unsuccessful).

Thanks,
Mario

Josh26
Virtuoso

It's worth thinking about whether a card is compatible with your server in addition to VMware's HCL.

Particularly with an HP server such as the one you mention; HP is notorious for cards that magically don't work, with support declaring "not on our HCL".

Therefore, you really have two HCLs to check here. The option list is actually quite small once you do.

Rumple
Virtuoso

If I never see another HP Qlogic 10g HBA again, it will be too soon.

I had to replace 14 dual-port HBAs with Intel X520-series cards at a cost of $11k, because the HP cards fell over weekly (if we were lucky enough to have them run that long).

markzz
Enthusiast

You should be getting onto HP about this..

HP has been a great deal of help with our NC523SFP issues and has replaced all of our NC523SFPs with the Emulex equivalent, which is the NC552SFP.

They have been working flawlessly now for about 2 months.

But I agree to some degree with your QLogic comment. I'll take it a little further, as we have had similar issues with the QLogic 1Gb cards.

If I never see another QLogic product it will be too soon. These rubbish products have cost me about 150 hours of work in the past year.

Oh, and let's not forget the impact to production systems, the downtime, and the loss of business data.

It's reasonable to put the cost of this exercise at a few hundred thousand dollars.

Good one, QLogic.

Regarding the equivalent of an HP HCL:

Use the HP online configurator to build the server. It simply won't let you add hardware that is not compatible.

Once you have the hardware spec complete, you can export it to Excel. Then go to the VMware HCL and check the server and the add-in components for compatibility with VMware.

It's very straightforward.

markzz
Enthusiast

It appears the OCe11102-NT is a dual-port 10Gb card that does not support iSCSI initiation and therefore cannot be used to boot from.

The OCe11102-IT does support the iSCSI initiator and therefore can be used to boot from.

Rather than go directly to Emulex, you could look at the HP version of this card. It's actually made by Emulex but rebadged by HP and called the NC552SFP.

They have been brilliant.

I have no complaints; excellent performance and stability.

If you're running HP servers, these cards are compatible with the G6 and G7 servers. (They do work in the G5s, but are not supported.)

Also, the HP version does not have a fan on the heat sink. I believe this is an advantage: fewer fans to fail.

cypherx
Hot Shot

Chris Wahl wrote:

Not sure there is a right answer here, but I use QLogic QLE3242 cards and have not had any issues with them.

They are certified for 4.1u1.

Chris, are you still using the QLE3242 cards?

We were having trouble maintaining connectivity to new NFS storage across these adapters on ESXi 5 U3, build 1489271.  So far the fix (I hope it's a fix) was to update the firmware and driver to these versions:

driver: qlcnic

version: 5.1.178

firmware-version: 4.16.34

Originally we had driver 5.0.727 and firmware 4.9.x.

I found another thread on here with the same NIC and poor iSCSI stability when using jumbo frames.  Going back to a 1500 MTU stabilized it for them, but after they upgraded to firmware 4.12.x, jumbo frames were stable too.  That post was quite some time ago, and as you can see, 4.16.34 is now out.  I also installed the QLogic CIM provider on each host, plus the vCenter Server plugin, so I can now view and manage these cards.

I made this change only a week ago, but so far so good.  Knocking on wood...
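As a general tip for anyone chasing the same problem, it is worth scripting the firmware check so you can compare every host against a known-good version. A minimal sketch, assuming POSIX shell plus GNU `sort -V`; the sample text is the `ethtool -i` output from this post, and the "minimum" version is just the one that worked here, not an official threshold:

```shell
#!/bin/sh
# Sketch: verify a NIC's qlcnic firmware meets a minimum version.
# The sample text below is the `ethtool -i vmnic4` output quoted in this
# post; on a real host you would capture that command's output instead.
sample="driver: qlcnic
version: 5.1.178
firmware-version: 4.16.34"

# Pull out the firmware-version field.
fw=$(printf '%s\n' "$sample" | awk -F': ' '/^firmware-version/ {print $2}')
min="4.16.34"   # assumed known-good version (the one that worked here)

# sort -V orders version strings numerically; if the minimum sorts first
# (or the two are equal), the installed firmware is new enough.
lowest=$(printf '%s\n%s\n' "$fw" "$min" | sort -V | head -n1)
if [ "$lowest" = "$min" ]; then
    echo "firmware $fw OK (>= $min)"
else
    echo "firmware $fw too old (< $min)"
fi
```

Run it against each host's `ethtool -i` output and you get a quick yes/no instead of eyeballing version strings.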

markzz
Enthusiast

Ah, this old thread...

I have a few comments here.

We struggled for about 6 months with the QLogic cards until, with assistance from HP and QLogic, we removed them, carefully placed them in the bin (yep), and installed the HP-branded Emulex 10Gb NC552SFP.

They have been an excellent, stable, and fast 10Gb NIC; everything the QLogic was not. BUT I should have bought the Intel cards, and will with the next batch.

Oh, and QLogic has purchased Brocade's HBA business, so Brocade HBAs are now on the blacklist.

The moral of the story is that QLogic cards are cheap, and there's a reason for it. If you want stability, go with the Intel or Emulex 10Gb NICs.

If you only have short cable-length requirements, consider Cat6a or copper SFP cables. They are cheap, reliable, and lower latency than fiber.

cypherx
Hot Shot

Well, it's been a week now, and the firmware update from 4.9 to 4.16.34 so far appears to be working.

Is the Emulex OCe14102-NX a good card?  Hopefully these QLE3242 cards hold out with the new firmware and drivers... but it's good to think ahead for when we add more VM hosts.

pinkerton
Enthusiast

Hi,

We are thinking about upgrading our vSphere hosts (HP ProLiant DL380 G6) with 10GbE NICs. Have the connectivity issues with the QLogic cards (HP NC522SFP and HP NC523SFP) been solved? Back in 2012 we had massive problems with 1GbE cards and needed to replace them, since the firmware updates did not resolve the issues.

In addition: does anybody use the Emulex NC552SFP card with HP DL380 G6 servers? It is not listed as compatible in the server's QuickSpecs, but I don't understand why it shouldn't work, since the previous model, the NC550SFP, is compatible and the only change between the cards is the version of the Emulex chip.

Thanks
Michael

woozte
Contributor

I have a ProLiant DL380 G6 and purchased an NC523SFP for 10Gb connectivity to an MSA 2040.

I have connectivity problems even after updating the firmware and driver.

~ # esxcli network nic list
Name    PCI Device     Driver  Link  Speed  Duplex  MAC Address        MTU   Description
------  -------------  ------  ----  -----  ------  -----------------  ----  -----------
vmnic0  0000:002:00.0  bnx2    Up     1000  Full    xx:xx:xx:xx:xx:xx  1500  Broadcom Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
vmnic1  0000:002:00.1  bnx2    Down      0  Half    xx:xx:xx:xx:xx:xx  1500  Broadcom Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
vmnic2  0000:003:00.0  bnx2    Down      0  Half    xx:xx:xx:xx:xx:xx  1500  Broadcom Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
vmnic3  0000:003:00.1  bnx2    Down      0  Half    xx:xx:xx:xx:xx:xx  1500  Broadcom Corporation NC382i Integrated Multi Port PCI Express Gigabit Server Adapter
vmnic4  0000:00d:00.0  qlcnic  Up    10000  Full    xx:xx:xx:xx:xx:xx  1500  QLogic Corp HP NC523SFP 10GbE 2-port Ethernet Server Adapter
vmnic5  0000:00d:00.1  qlcnic  Up    10000  Full    xx:xx:xx:xx:xx:xx  1500  QLogic Corp HP NC523SFP 10GbE 2-port Ethernet Server Adapter

~ # ethtool -i vmnic4
driver: qlcnic
version: 5.5.190
firmware-version: 4.16.50
bus-info: 0000:0d:00.0

I boot my ESXi host and the links to my storage are up; after a variable time the links go down.

Rebooting the server restores the correct situation, but within minutes we are back where we started.

Reading the ESXi logs, I found:

2014-11-11T15:38:47.412Z cpu9:33367)<3>qlcnic 0000:0d:00.1: vmnic5:qlcnic_check_temp:5969:Device temperature 108 degrees C exceeds maximum allowed.

So it seems to be a temperature problem. I am opening a ticket with HP support and will tell you the response.
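For what it's worth, that over-temperature message is easy to extract and watch for. A small sketch, using the log line quoted above as sample input; on a live host you would scan the vmkernel log itself (the exact log path varies by ESXi version, so treat `/var/log/vmkernel.log` as an assumption):

```shell
#!/bin/sh
# Sketch: extract the reported temperature from a qlcnic over-temp event.
# The sample line is the vmkernel entry quoted in this post; on a host you
# would instead grep the vmkernel log, e.g. /var/log/vmkernel.log (assumed).
log='2014-11-11T15:38:47.412Z cpu9:33367)<3>qlcnic 0000:0d:00.1: vmnic5:qlcnic_check_temp:5969:Device temperature 108 degrees C exceeds maximum allowed.'

# Capture the numeric temperature from the message.
temp=$(printf '%s\n' "$log" \
  | sed -n 's/.*Device temperature \([0-9]*\) degrees C.*/\1/p')

echo "reported temperature: ${temp}C"
```

Pointing the same `sed` expression at the live log would let you alert before the links actually drop.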

Eildon
Contributor

Hi Woozte,

This discussion is scaring the pants off me!

Just about to implement our new server/storage/VMware environment next week. My DL380 Gen8s have NC523SFPs. Looks like I am going to run into problems, going by what you and others are saying.

Can you keep us updated as to what HP comes up with on this, please?

Vice versa, when I do my setup, I will let you know if I am seeing the same behaviour.

Regards,

Steve

woozte
Contributor

Hi Eildon,

I chatted with HP Support this morning and they told me to:

- install the NC523SFP in PCIe riser slot 2 instead of slot 1 (slot 1 is nearest the mainboard)

- change the thermal configuration in the RBSU from "Optimal Cooling" to "Increased Cooling"

My DL380 G6 + MSA 2040 have been connected since this morning without problems; I will monitor the situation over the next few days.

Bye
