Same problem here, Matt..
1 x NC523SFP with the latest firmware and VMware driver, both ports connected to a Cisco 3750-X (latest IOS); DL380 G6, vSphere 4.1 build 348481, a few VMs on a lightly loaded host.
006248: Dec 30 19:57:23.911: %LINEPROTO-5-UPDOWN: Line protocol on Interface TenGigabitEthernet1/1/2, changed state to down
006249: Dec 30 19:57:23.945: %LINEPROTO-5-UPDOWN: Line protocol on Interface TenGigabitEthernet3/1/1, changed state to down
006250: Dec 30 19:57:24.918: %LINK-3-UPDOWN: Interface TenGigabitEthernet1/1/2, changed state to down
006251: Dec 30 19:57:25.086: %LINK-3-UPDOWN: Interface TenGigabitEthernet3/1/1, changed state to down
006252: Dec 30 19:57:36.628: %LINK-3-UPDOWN: Interface TenGigabitEthernet3/1/1, changed state to up
006253: Dec 30 19:57:36.628: %LINK-3-UPDOWN: Interface TenGigabitEthernet1/1/2, changed state to up
006254: Dec 30 19:57:38.725: %LINEPROTO-5-UPDOWN: Line protocol on Interface TenGigabitEthernet1/1/2, changed state to up
006255: Dec 30 19:57:38.742: %LINEPROTO-5-UPDOWN: Line protocol on Interface TenGigabitEthernet3/1/1, changed state to up
We have had problems with the NC522SFP for about 18 months now. Each time we upgrade the firmware and/or drivers the problems morph but never go away. We continue to see transmit timeouts, excessive Xoff pause frames, port resets, and PSOD.
Even our new ESXi 5.0 hosts with the most current NC522SFP firmware and drivers still have the problems.
We still have about 60 hosts with NC522SFP adapters.
- HP ProLiant DL380 G6, G7, and DL580 G7 servers
- NC522SFP ports connected to separate Cisco Nexus 5000 switches
- ESXi 4.1 U1, U2, and ESXi 5.0
- NC522SFP firmware = 4.0.579
- ESXi 5.0 nx_nic driver = 5.0.601
We have open and active cases with HP and VMware. Both have acknowledged a problem, but as of today we still don't have a fix. I have lost all confidence in the NC522SFP.
Time to move on...
Yeah, we started with the 523 and then tried out the 522 (which made things worse). Just yesterday I replaced four NC523SFPs with Intel X520-DA2 cards in two of our servers. I will post back in about a week on whether the cards are stable.
That would be great; I hope it goes well. I think we will need to go down this path too..
Also, has anyone tried the firmware that VMware lists on the HCL?
Model: NC523SFP 10Gb 2-port Server Adapter
Device Type: Network
Partner Name: HP
VID: 1077  DID: 8020  SVID: 103c  SSID: 3733
Number of Ports: 2
Firmware Version: 4.6.31 (firmware); 4.0.702 (driver)
ESXi 5.0: qlcnic version 5.0.727 (async)
ESX / ESXi 4.1 U2: qlcnic version 4.0.727
Footnote: download the driver from http://www.vmware.com/download/vsphere/drivers_tools.html
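If you want to check what your hosts are actually running against that HCL entry, this works from the ESXi shell (vmnic4 is just an example port number; substitute your own):

    # List all physical NICs and their drivers to find the NC523SFP ports
    esxcli network nic list
    # Show driver version and firmware version for one port (ESXi 5.0)
    esxcli network nic get -n vmnic4
    # On ESX(i) 4.1 the ethtool query gives the same info
    ethtool -i vmnic4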
It has been a week and a half and we have had no issues with the Intel NICs. Today I am replacing the remaining NC523SFPs and shipping them back.
Best of all, HP decided to close my ticket with them this weekend, without contacting me.
I edited this post because I previously mentioned turning on VMDq. I have tested it on two systems, and performance seems worse when you explicitly configure it than when you leave it at the default setting. I recommend not messing with the VMDq setting.
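If you already set VMDq-related options and want to go back to the defaults, this is roughly how I'd do it from the ESXi shell. The module name below is just my example: nx_nic is the NC522SFP driver and qlcnic the NC523SFP driver, so check which one your card loads with esxcfg-nics -l first.

    # Show whether any options are currently set on the driver module
    esxcfg-module -g nx_nic
    # Clear the options to return to driver defaults (takes effect after a reboot)
    esxcfg-module -s "" nx_nic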
One last update. We have replaced the cards in both of our datacenters with Intel X520-DA2s, and after updating the drivers to the most current version I have had no more issues. Ditching the QLogic cards was the solution.
"They are saying that you HAVE to use their SFP's. I am not. I am using cisco SFP's, which I figured would work fine.
Their support page is pretty clear though. They dont say things like "it's not supported" or "not certified".
They flatly declare it WILL NOT WORK."
Can you please send me the link that says this? I would like to check it out.
Just want to update everyone on SR 11057191404, which was opened by ManFriday. It is still open and under investigation by both Cisco and VMware.
So this is a big issue.
We too, unfortunately, purchased the NC523SFP cards.
We have been running these cards for about a year, and they have been trouble from the start.
Although there have been various firmware and driver updates, these cards have intermittently suffered link-loss issues. Generally the cards recover within a few seconds.
A week or so ago we experienced the same link loss, but this time on both cards at the same time. Of course this meant a production outage..
I took the plunge and upgraded one host to ESXi 5, and applied the new firmware and drivers.
I'd be lying if I said this had improved the situation. It's in fact much worse.
We don't suffer the link-loss issues anymore; the cards appear fine, they just don't transmit packets. Oh, and somehow CPU utilisation of the host also flatlines during this issue. At times the host recovers; sometimes I have to reboot the host to get it back.
We are using the NC522SFP cards in our G6 hosts; they have been stable for the past two years, but they did not start out that way..
I'm also trialling the Emulex-rebranded card, the NC552SFP; so far so good..
We will need to make some hasty decisions on this issue this week; it's no longer a workable situation. The NC523SFPs need to go.
I'll get hold of an Intel X520-DA2 and trial it alongside the NC552SFP.
There is a later driver for the NC523SFP (or QLogic QLE3242) from QLogic; the driver can be downloaded from VMware.
This obviously means HP doesn't support the driver, but QLogic and VMware do..
I'll do some testing and report back.
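For anyone else trying this, installing an async driver bundle on ESXi 5.0 looks roughly like this. The bundle filename below is a placeholder for whatever VMware actually ships; put the host in maintenance mode first.

    # Install the offline bundle after copying it to a datastore on the host
    esxcli software vib install -d /vmfs/volumes/datastore1/qlcnic-offline_bundle.zip
    # Reboot, then confirm the new driver version is active on a port
    esxcli network nic get -n vmnic4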
Have you tested this?
Please let us know.
I've effectively been testing the driver for about 12 hours. Unfortunately the result is the same.
The NFS stores are periodically going offline, but they do appear to recover after a few seconds. You could say it's improved, but it's not at all workable.
An extract from the logs:
Lost connection to server fasdc01nfs10gb mount point /vol/esx_aggr3_file_01/esx_aggr3_file_01_qtree mounted as
10/04/2012 10:16:57 AM
Restored connection to server fasdc01nfs10gb mount point /vol/esx_aggr3_file_01/esx_aggr3_file_01_qtree mounted as 7327dc8f-d2c7c3a1
10/04/2012 10:17:12 AM
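If it helps, you can watch for these disconnects live on the host rather than waiting for the vCenter events. The log path shown is for ESXi 5.0; on ESX(i) 4.1 it's /var/log/vmkernel.

    # Tail the vmkernel log and show only NFS connection loss/recovery messages
    tail -f /var/log/vmkernel.log | grep -Ei "lost connection|restored connection"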
Just a thought: your server hasn't exceeded the configuration maximums, has it? i.e., how many NICs in total do you have in this system?
Good thought, damicall; I'd not thought of that one..
The server has 16 NICs (as in ports). I'm not sure what the supported number of NICs is with ESXi 5, but it was 20 under ESX 4, so I assume we are within a supported configuration.
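For anyone checking their own hosts against the maximums, a quick way to count the ports the hypervisor actually sees (each physical port shows up as a vmnic):

    # List physical ports, then count the vmnic lines (skips the header row)
    esxcfg-nics -l
    esxcfg-nics -l | grep -c vmnic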
10Gb NICs complicate the config maximums a little bit.
Looks like the limit in 5.0 is "six 10gb and 4 1gb ports".