I was thinking about adding a second riser and moving the second card over so I could have each card in slot 1 of its own riser. Currently I have both NC523SFPs on the same riser, in slot 2 and slot 3. But I'm not sure this would make a difference. How do you have yours set up?
We use the NC522SFP in a number of DL585g6's.
They were quite troublesome in the early days but have been stable since their last firmware and driver update.
If I recall correctly, the firmware released late last year is stable.
Driver and firmware information:
~ # ethtool -i vmnic2
~ # ethtool -i vmnic0
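For anyone comparing against their own hosts: `ethtool -i` prints `driver`, `version`, and `firmware-version` lines, and the version fields can be pulled out with a one-liner. The sample output below is purely a placeholder, not values from a real NC522SFP:

```shell
# Illustrative `ethtool -i vmnicX` output saved to a file;
# the version numbers are placeholders, not real NC522SFP values.
cat > ethtool-i.txt <<'EOF'
driver: nx_nic
version: 1.2.3
firmware-version: 4.5.6
EOF
# Print just the driver and firmware versions
awk -F': ' '/^version:|^firmware-version:/ {print $2}' ethtool-i.txt
```

Handy when you need to collect the versions from a batch of hosts for a support case.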
Oh, another thought.
I've had reports that the NC375Ts are part of the problem, and these should also be replaced with NC365Ts.
The DL585g6s mentioned use NC364T add-in NICs (the NC364T is the earlier version of the NC365T; both use an Intel chipset).
caledunn, we do use some DL385g7s; generally they are purchased with the 2nd riser for expansion.
I would put the 2nd 10Gb NIC on the 2nd riser for redundancy and to distribute the load across the buses.
In our DL585s the expansion board which adds 3 more PCIe interfaces is in fact called a riser. I do the same there: one of the 10Gb cards is installed in it.
To be honest it doesn't seem to make any difference. The NC523s still fail.
Your plan with the NC522 is sensible, BUT they run HOT HOT HOT.
I'd suggest you set your system BIOS to maximum cooling, and be sure the servers are getting plenty of nice cold air.
Is the latest ESXi driver for the NC523SFP card still 4.0.727, or is it now 4.0.739? The hardware compatibility guide still lists it as 4.0.727, but when I go to the ESXi driver CD I only see 4.0.739. The download link on the HP advisory page also takes me to a page for 4.0.727.
hardware compatibility guide:
vmware 4.0.727 page:
Vmware driver cd:
If I remember correctly:
The recommended driver for ESXi 5 is 5.0.727.
The recommended driver for ESX/ESXi 4 is 4.0.727.
But there are later driver versions available:
For ESXi 5: qlcnic-esx50-5.0.741-635278.zip (version 5.0.741).
I'm not sure if there is a later version than 4.0.727 for ESX 4.
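If it helps, on ESXi 5 you can confirm which qlcnic driver VIB a host is actually running with `esxcli software vib list`. The snippet below just filters a saved copy of that output for the driver line; the output shown is a made-up example (version string invented for illustration), not from a real host:

```shell
# Hypothetical `esxcli software vib list` output saved to a file;
# the version string is an invented placeholder, not a real VIB version.
cat > vib-list.txt <<'EOF'
net-qlcnic  5.0.741-1OEM.500.0.0.406165  QLogic  VMwareCertified  2012-03-01
EOF
# On a live ESXi 5 host you would instead run:
#   esxcli software vib list | grep qlcnic
awk '/qlcnic/ {print $2}' vib-list.txt
```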
I'd just like to add that we are running the latest versions.
~ # ethtool -i vmnic12
And it's still terribly unstable.
Although if I keep my vNIC MTU at 1500, I only suffer the link-loss issues. If I push my vNIC to 9000 (the vSwitch is running an MTU of 9000), after some time the NC523's ports with these MTU values won't pass packets at all.
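For what it's worth, when chasing jumbo-frame problems like this I first verify the path end to end with `vmkping` and the don't-fragment flag. The maximum ICMP payload for a 9000-byte MTU is 9000 minus the 20-byte IP header and the 8-byte ICMP header; the target address below is a placeholder for your storage/vmkernel IP:

```shell
# Max ping payload that fits in a 9000-byte MTU:
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972
echo $((9000 - 20 - 8))
# On the ESXi host, test the jumbo path without fragmentation
# (10.0.0.50 is a placeholder address, not from this thread):
#   vmkping -d -s 8972 10.0.0.50
```

If the 8972-byte ping fails while a 1472-byte one succeeds, something in the path (NIC, vSwitch, physical switch) isn't honouring the 9000 MTU.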
Just a very quick update on this debacle of a situation.
After a few weeks battling with HP support, who seem more confused by this issue than I am, and VMware, who were really no help at all, I managed to get hold of our Enterprise account manager.
He has been very helpful and we have made significant progress by involving the local Australian Technical team and making some rather radical changes.
HP agreed to send over 2x NC552SFP (10Gb Emulex), which replaced the 2x NC523SFPs, and 2x NC365T (Intel 1Gb), which replaced the 2x NC375Ts.
This combination has now been running for 5 days.
The new NICs have not reported any failures, link loss, anything at all.
To put this into perspective: these servers have been running for over 12 months, and there has never been a period of 5 days where they have not experienced link loss or some other NIC port failure.
The onboard NC375i have been behaving better, but they are still the one thing I'm not confident about. I've seen a couple of vMotions fail. I've not done any monitoring or diagnosis yet, but it seems that when these ports hit 90% utilisation they pause and no longer pass packets (sounds like some other QLogic NICs).
I did read a forum entry where it was stated HP and QLogic recognise there is an issue with the onboard NC375i chipset and have a replacement riser available which resolves this issue.
At this point I'm going to be kind to HP and begin discussing replacing the NC523SFPs and NC375Ts in our other 585g7s. I'm not sure if this will be a swap-out or purchase situation; either way I'm just happy to see some improvement in stability.
I've also been testing with the X520-DA2, NC550SFP and NC552SFP cards, and without making a single configuration change they have yet to go down.
I have several vSphere clusters I've been testing with: each cluster has at least one ESXi host with the NC523SFP cards, and the other hosts have the Intel or Emulex cards. The hosts with the NC523SFP cards go down every few days but the others stay up, and the only thing that changed was replacing the cards. We are moving forward with purchasing more NC552SFP cards, and that will be our solution to the problem. We will just eat the cost on the 20+ cards we have; we are hoping we can reuse them with our Windows servers.

I would have responded earlier, but I wanted to give it a couple of weeks to make sure the Emulex and Intel cards were stable. I'll let you know if I run into any problems with the new cards. I'll add that VMware support was actually pretty helpful for us: they didn't offer a solution, but they helped troubleshoot and narrow down where the issue is, and they kept the ticket open.
I'd like to update this post with where we are at and the stability of the NC375T, NC523SFP, NC375i.
I would like to start by saying our endgame solution with these QLogic network cards has simply been to replace them. I do, however, have some lab servers which still use the NC375T NIC.
It would be wonderful to report there was an achievable solution which stabilised these network cards, BUT THERE IS NOT.
QLogic have continued to release firmware and drivers in an attempt to resolve the various performance and stability issues; nonetheless, it does not appear they have achieved an acceptable result.
My advice is to just avoid these network cards.
Emulex, Brocade and Intel chipset cards are available from HP. These may be marginally more expensive, but they work. If I were asked for a recommendation:
NC552SFP are stable and fast
NC365T are again stable and fast.
Intel cards are always expensive, but they simply work.
(my opinions are my own, my experience is what I share)
Are these cards related to the QLogic QLE3242 dual port 10GbE adapter?
We were having trouble maintaining new NFS storage connectivity across these adapters in ESXi 5 U3, build 1489271. So far the fix (I hope it's a fix) was to update the firmware and driver to these versions:
Originally we had driver 5.0.727 and firmware 4.9.x.
I found another thread on here with the same NIC and poor iSCSI stability when using jumbo frames. Going back to 1500 MTU would stabilize it for them, but then they upgraded to firmware 4.12.x and jumbo was stable for them. That post was quite some time ago, so as you can see 4.16.34 is now out. I also installed the QLogic CIM provider on each host and the vCenter Server plugin, so I can now view and manage these cards.
I made this change only a week ago, but so far so good. Here's knocking on wood...