VMware Cloud Community
iambrucelee
Contributor

HP NC532i (Broadcom 57711E) network adapter in a Flex-10 environment caused a hard crash; which bnx2x driver should we use?

Is anyone else having this issue? We just had 3 servers crash due to a bnx2x_panic_dump. Once the network cards crashed, the ESX server had to be rebooted to recover. Even though only a few vmnics died, the entire server became unreachable, and the VMs became unreachable, even if the vmnic wasn't bound to the vSwitch that the VM was on.

After researching, it appears that VMware supports 3 different drivers:

1. bnx2x version 1.45.20

2. bnx2x version 1.48.107.v40.2

3. bnx2x version 1.52.12.v40.3

On 6/10/2010 VMware came out with a patch for 1.45.20, but esxupdate marked it as obsolete, since our version (1.52.12.v40.3) was newer. Should I downgrade my driver?
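For anyone wanting to compare versions, the loaded driver and bootcode can be checked from the ESX service console along these lines (vmnic0 is just an example; substitute your own Broadcom vmnics):

#esxcfg-nics -l (lists each vmnic and the driver it is bound to)

#ethtool -i vmnic0 (shows the bnx2x driver version and the "bc" bootcode firmware revision)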

Also the VMware HCL has conflicting information. According to this:

http://www.vmware.com/resources/compatibility/search.php?action=search&deviceCategory=io&productId=1...

1.52.12.v40.3 is supported by vSphere 4 Update 2 but not vSphere 4 Update 1, yet the U2 release only has an update for the 1.45.20 driver.

Yet according to this:

http://www.vmware.com/resources/compatibility/search.php?action=search&deviceCategory=io&productId=1...

1.52.12.v40.3 is supported by both vSphere 4 Update 2 and vSphere 4 Update 1.

Here are the details of my environment:

HP BL460c G6 blade servers, with Flex-10 modules.

The individual blades are using HP NC532i Dual Port 10GbE Multifunction BL-c Adapter, firmware bc 5.0.11.

The chassis OA itself is using firmware v3.0.

The Flex-10 module is using firmware v. 2.33.

Crash Dump:

Jun 16 17:03:54 esx-2-6 vmkernel: 0:01:03:09.131 cpu1:4426)VMotionRecv: 1080: 1276732954553852 D: Estimated network bandwidth 75.588 MB/s during page-in

Jun 16 17:03:54 esx-2-6 vmkernel: 0:01:03:09.131 cpu7:4420)VMotion: 3381: 1276732954553852 D: Received all changed pages.

Jun 16 17:03:54 esx-2-6 vmkernel: 0:01:03:09.245 cpu7:4420)Alloc: vm 4420: 12651: Regular swap file bitmap checks out.

Jun 16 17:03:54 esx-2-6 vmkernel: 0:01:03:09.246 cpu7:4420)VMotion: 3218: 1276732954553852 D: Resume handshake successful

Jun 16 17:03:54 esx-2-6 vmkernel: 0:01:03:09.246 cpu3:4460)Swap: vm 4420: 9289: Starting prefault for the migration swap file

Jun 16 17:03:54 esx-2-6 vmkernel: 0:01:03:09.259 cpu0:4460)Swap: vm 4420: 9406: Finish swapping in migration swap file. (faulted 0 pages, pshared 0 pages). Success.

Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_stats_update:4639(vmnic1)]storm stats were not updated for 3 times
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_stats_update:4640(vmnic1)]driver assert
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:658(vmnic1)]begin crash dump -


Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:666(vmnic1)]def_c_idx(0xff5) def_u_idx(0x0) def_x_idx(0x0) def_t_idx(0x0) def_att_idx(0xc) attn_state(0x0) spq_prod_idx(0xf8)
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:677(vmnic1)]fp0: rx_bd_prod(0x6fe7) rx_bd_cons(0x3e9) *rx_bd_cons_sb(0x0) rx_comp_prod(0x7059) rx_comp_cons(0x6c59) *rx_cons_sb(0x6c59)
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:682(vmnic1)] rx_sge_prod(0x0) last_max_sge(0x0) fp_u_idx(0x6afb) *sb_u_idx(0x6afb)
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:693(vmnic1)]fp0: tx_pkt_prod(0x0) tx_pkt_cons(0x0) tx_bd_prod(0x0) tx_bd_cons(0x0) *tx_cons_sb(0x0)
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:697(vmnic1)] fp_c_idx(0x0) *sb_c_idx(0x0) tx_db_prod(0x0)
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[4f]=[0:deda0310] sw_bd=[0x4100b462c940]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[50]=[0:de706590] sw_bd=[0x4100b4697b80]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[51]=[0:deac2810] sw_bd=[0x4100baad8e80]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[52]=[0:de9ae390] sw_bd=[0x4100bda03f40]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[53]=[0:de3e9a90] sw_bd=[0x4100b463ecc0]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[54]=[0:3ea48730] sw_bd=[0x4100bab19100]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[55]=[0:de5b1190] sw_bd=[0x4100bda83980]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[56]=[0:ded48410] sw_bd=[0x4100bdb06080]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[57]=[0:3e3f0d10] sw_bd=[0x4100bca0f480]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.229 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[58]=[0:de742110] sw_bd=[0x4100bda35d40]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.230 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[59]=[0:de6ffc90] sw_bd=[0x4100bcab3800]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.230 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[5a]=[0:de619710] sw_bd=[0x4100b4640c40]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.230 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[5b]=[0:de627e10] sw_bd=[0x4100bcaad440]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.230 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[5c]=[0:3e455e10] sw_bd=[0x4100b462a9c0]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.230 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[5d]=[0:de3a6110] sw_bd=[0x4100bdaf1d80]
Jun 16 17:09:42 esx-2-6 vmkernel: 0:01:08:57.230 cpu1:4280)<3>[bnx2x_panic_dump:712(vmnic1)]fp0: rx_bd[5e]=[0:3e37df90] sw_bd=[0x4100b470d580]

Any thoughts or suggestions?

Reply
0 Kudos
102 Replies
Rabie
Contributor

Mackopes, we have a very similar setup (slightly smaller): 2 enclosures, Flex-10, and 20 blades across the 2 enclosures.

This is a slightly off-topic question, but why did you decide to run 3 separate enclosures as opposed to 3 stacked enclosures?

We are in the process of merging the 2 enclosures and just wanted to know if there are potential issues that you have run into or reasons why you decided against the configuration.

Regards

Rabie

Reply
0 Kudos
Mackopes
Enthusiast

@Rabie

Our enclosures cross site boundaries. We have spanned ESX clusters (half at one site, half at another, connected via dark fiber).

So we didn't see any benefit to stacking the links (since we need VMotion to be able to reach the enclosures at the other site).

Also, since we bought VCEM, the convenience of managing fewer VC domains (I assume that is one reason you are doing it?) is reduced, since VCEM can manage them all centrally.

AK

Reply
0 Kudos
Rabie
Contributor

@Mackopes,

Thanks for the feedback.

We want to contain as much of the traffic inside the blade enclosures as possible, which stacking would provide; we would also require more uplinks to achieve the connectivity if we ran the enclosures separately. (Currently each enclosure runs a separate cluster, and we want to rearrange the clusters to span enclosures for redundancy.)

And we don't make many changes to our blade environment, as it's a dedicated ESXi environment.

Reply
0 Kudos
musicnieto
Contributor

Hey guys,

I've been heads-down in a NetApp issue that was just resolved, so I wanted to jump back into the thread. Thanks, everyone, for listing your configs. Since everyone's environment is unique, I am guessing that it is a particular config that causes the issues.

In my case, the 1.60 driver just doesn't work. My configuration is the following:

I have 2 HP chassis stacked together to make 1 VC domain.

I have a 20 Gb trunk going from each chassis (VC slot 1) back to 1 core Nexus 7000 switch. I have 2 shared uplink sets configured as Active/Active.

The networking on each host is the following:

2 NICs - public traffic, Control, Packet, and Management (pertains to the 1000v); the service console is on these 2 NICs as well.

2 NICs - dedicated to storage, including the VMkernel port for storage.

2 NICs - dedicated to FT, including the VMkernel port for FT.

2 NICs - dedicated to VMotion, including the VMkernel port for VMotion.

The Nexus 1000v is configured with 4 uplinks mapped directly to the paired NICs:

  • Uplink for public traffic
  • Uplink for storage
  • Uplink for FT
  • Uplink for VMotion

My environment is stable as of now, since I have downgraded from the 1.60 driver to 1.54 v2 (which is the express patch).

The only issue I am experiencing now is when I test network failover: I disable 1 shared uplink set and I lose connectivity to my ESX hosts.

We use manual MAC pinning on the 1000v because Flex-10 doesn't support port channels on the server-facing NIC uplinks.

I am going to look into whether our MAC pinning is set up correctly.
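For anyone else checking theirs, a mac-pinning Ethernet uplink port-profile on the 1000v generally looks something like the sketch below (the profile name and VLAN range here are placeholders, not our actual config):

port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled

The "channel-group auto mode on mac-pinning" line is the piece that pins traffic to individual uplinks without requiring a port channel on the upstream side.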

Is anyone else experiencing issues with Smartlink?

Reply
0 Kudos
Erwin_Zoer
Contributor

Hi,

I am an HP employee working as a virtualization consultant, and I am currently researching this issue in preparation for a VI design. In the process I have compiled some information on this rather murky subject. Please note that I am obliged to state that this post is my personal opinion and does not necessarily reflect HP's point of view. However, I'll try to be as accurate as I can using the information I have available.

First, a short backgrounder. Device Control Channel (DCC) is required to support SmartLink in a Flex-10 environment. SmartLink is the technology which allows VC to pass uplink status changes on to downstream hosts. DCC is used to manage port speed, network assignment, and link status between Virtual Connect Flex-10 Ethernet modules and their associated Flex-10 NICs. DCC provides the ability to make port speed changes, as well as network port reassignments in server profiles, without shutting down the blade servers. It also provides Flex-10 NIC granularity, which allows the SmartLink feature in VC to disable individual Flex-10 downlink ports when a total upstream link failure is detected on the port's associated network.

On a hardware level DCC functionality for ESX/ESXi 4.0 requires the following:

• Virtual Connect firmware version of 2.31 (minimum, 3.15 recommended).

• NC532i/m bootcode version 5.2.7 (part of firmware update 2.2.8).

Note that the bootcode minimum version is required to allow the NIC and VC to properly communicate using DCC during the POST.

As far as I can tell at this point in time, we have two working options for ESXi 4.x using Flex-10 connectivity.

ESX 4.1 (without SmartLink)

Use the 1.54 driver provided with the GA build. However, this does not support DCC, as described in:

VMware ESX/ESXi 4.1 - Broadcom bnx2x VMware ESX Driver Version 1.54 Does Not Function With Virtual Connect Device Control Channel (DCC) and SmartLink Capability for 10 Gb Broadcom Adapters in VMware ESX/ESXi 4.1

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c02476622

Since it appears that you are using this driver (the v2 has me puzzled), SmartLink functionality does not work, and this would explain why disabling the SUS does not cascade the status back to the host pNICs. As a workaround, two solutions could be used:

A. Disable SmartLink, configure each vSwitch with 3 or more pNICs as uplinks and enable beacon probing.

B. Disable SmartLink, and connect 2 or more uplinks from each VC Ethernet module to two physically separate switches. VC will attempt to establish an LACP trunk, so this requires that the physically separate switches are stacked, like the Cisco 3750s for example. What this configuration aims to achieve is a situation where a switch failure will never cause a VC uplink failure. Hence, the uplink itself is no longer a SPOF and SmartLink is no longer required.

Use of the newer 1.60 driver will support SmartLink. However, this version has been reported internally to cause connectivity issues as well. Hence, this does not appear to be a viable alternative at this time, although people have reported successes with this driver too.

UPDATE 26/1/2011:

A workaround has been posted by VMware for the connectivity issue:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=103180...

By adjusting the heap memory allocation for the Broadcom bnx2x driver, the connectivity issues no longer occur. Hence, SmartLink can now be used on ESX 4.1 as well. Run this command from the ESX console to set the amount of heap memory allocated by the Broadcom bnx2x driver to 36 MB:

#esxcfg-module -s skb_heap_max=36000000 bnx2x
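To confirm the option has been picked up (my suggestion, not something from the KB article), you can read the configured module options back and then reboot the host so the driver reloads with the new heap size:

#esxcfg-module -g bnx2x (should report skb_heap_max=36000000 in the option string)

#reboot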

ESX 4.0 (with SmartLink)

Use the 1.52.12.v40.8 driver which can be downloaded from:

VMware ESX/ESXi 4.0 Driver CD for Broadcom NetXtreme II BCM57710, BCM57711, BCM57711E Ethernet Controllers

This driver does support SmartLink functionality.

Based on the above, my conclusions currently are:

1. If ESXi 4.1's new functionality is required, SmartLink should not (yet) be used.

2. If SmartLink functionality is required, ESXi 4.0 should be used.

I hope this writeup has succeeded in shedding some light on this subject and I welcome feedback.

Regards,

Erwin

Reply
0 Kudos
musicnieto
Contributor

Erwin,

Great post. I have a question in regard to disabling SmartLink. You posted the option below.

B. Disable SmartLink, and connect 2 or more uplinks from each VC Ethernet module to two physically separate switches. VC will attempt to establish an LACP trunk, so this requires that the physically separate switches are stacked, like the Cisco 3750s for example. What this configuration aims to achieve is a situation where a switch failure will never cause a VC uplink failure. Hence, the uplink itself is no longer a SPOF and SmartLink is no longer required.

For my core switches, I have 2 Nexus 7000 switches in one environment, and in the other I have only 1 Nexus 7000.

In the environment where I have 2 Nexus 7000 switches, I send a 20 Gb trunk to each switch: one 20 Gb trunk from one c7000 chassis and one from the other.

In the environment where I have only 1 Nexus 7000, I still have two 20 Gb trunks, but they go to separate line cards.

In a stacked c7000 environment where you have 2 chassis, each with 2 VC modules, what would you recommend for the shared uplink sets to achieve the above scenario?

Reply
0 Kudos
musicnieto
Contributor

Hello everyone,

I hope everyone had an enjoyable holiday. This discussion has died down a bit, so I'm assuming everyone has finally stabilized their environments. I just wanted to write to let everyone know that we are now seeing success with the Broadcom 1.60 driver. SmartLink is also working, along with network failover, in our ESX 4.1 environment.

I received an email from a VMware engineer I had been working with on our case, and he explained that after I upgraded the Broadcom driver to version 1.60, I needed to adjust the socket buffer heap size.

I am running the latest firmware for the Virtual Connect, which is 3.15.

I am running the latest firmware for the OA, which is 3.11.

I am running the latest Broadcom firmware, which is 2.28 and includes bootcode 5.2.7.

I am running the 1.60 Broadcom NIC driver.

After upgrading the driver to 1.60, you need to run the following command on your ESX host.

Set the following driver option to increase the socket buffer heap size, and reboot after running the command: "esxcfg-module -s skb_heap_max=36000000 bnx2x"
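For anyone doing the driver portion from the console instead of Update Manager, the rough sequence is below. The bundle file name is only a placeholder for whatever the 1.60 driver CD offline bundle is actually called in your download:

#esxupdate --bundle=/tmp/bnx2x-1.60-offline_bundle.zip update (installs the 1.60 driver bundle)

#esxcfg-module -s skb_heap_max=36000000 bnx2x (raises the socket buffer heap to 36 MB)

#reboot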

After patching and running this command, I tested the following:

  • Multiple VMotions between all hosts with more than 1 VM - no issues or network dropouts were reported. (I was experiencing this issue prior to increasing the socket buffer heap size.)
  • Network failover test/SmartLink test - severing 1 shared uplink set to the environment. This test result was positive: the environment remained functioning on 1 shared uplink set, and the ESX host recognized that the virtual NIC was down. I tested on both SUSs. I also tested heavy VMotion traffic while 1 link was down and saw no problems.

I would say that as of today I have experienced no issues. I will continue to monitor over the next week or so and will hopefully sign off by the new year!

Reply
0 Kudos
ViFXStu
Contributor

@musicnieto, have you seen any further issues since your last post? I'm looking to upgrade a client's 4.1 environment and am still not 100% sure which version of the driver to use.

Thanks

Reply
0 Kudos
musicnieto
Contributor

ViFX Stu,

I haven't seen any issues to date. I am actually going to upgrade my huge prod environment this weekend in my Montreal datacenter. The environment I was testing on doesn't have an extremely large VM count, but when I was having issues it didn't take much network traffic to create the network drops.

I will be upgrading all of my ESX hosts residing on HP BL460c G6 blades to the latest patch level of ESX 4.1, and I will install the 1.60 driver and make the heap modification.

If you can hang tight till Monday, I will let you know how I make out. If I run into an issue, I will downgrade to 1.54 v2 of the Broadcom driver.

Reply
0 Kudos
bebman
Enthusiast

If using the 1.60.50.v41.2 driver, the issue listed in KB 1031805 should be taken into consideration. The KB seems to address what I think were the problems that musicnieto (in reply 64) was running into. At this point, I am not finding anything on whether the 1.60 driver will be patched to address THIS new issue.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=103180...

NOTE: If your problem or question has been resolved, please mark this thread as answered and award points accordingly.
Reply
0 Kudos
musicnieto
Contributor

Hey Guys,

I just wanted to check back in with everyone to let them know how I made out with the upgrades.

@bebman - the article below describes what I was experiencing. I believe that my environment was the basis for that article. When I called VMware support with the issue described in that article, they had never seen it before with the 1.60 driver. It took about a week until they had the workaround. What I did until I received the workaround was use the 1.54 v2 driver. This left me without failover in my VC domain.

So, back to the upgrade! I have to say the process was a long one but successful in the end. As of today the environment is running stable. I have 10 ESX hosts and about 250 VMs running on the cluster.

The steps I took are below.

  • I flashed the VC firmware first. This was 4 Flex-10 modules. This was the only place where I ran into an issue, which set me back an hour. I was able to flash 3 out of the 4 without a blip, but the lead module in the VC domain apparently got stuck in the flash process and never rebooted, so eventually the upgrade timed out. Once the upgrade timed out, it showed that the flash was only successful for 3 of 4 modules. (This was at 12:30am, and the maintenance window started at 10pm.) As an FYI, I completely took down all my VMs and ESX hosts for the flash process. To jump to the point of solving the issue, I just rebooted the VC module in question. Once it came back online, it registered the latest firmware and the VC domain showed no degraded status symbol. Firmware 3.15 is what I used.
  • I then flashed the OA firmware of the chassis to 3.21. I proceeded to do this before I fixed the VC module in question; I thought that maybe this was one of the reasons why the flash process failed.
  • Once everything on the HP side was up to speed, I took the following approach. I had 3 hosts that were already patched with the latest updates, drivers, and heap setting change. I brought those up first. I then had 7 hosts left to patch, so I brought them up 3 at a time, VMotioned all machines off to the other 3 hosts that were already up and running, and proceeded with my maintenance. I did the work in the following order: I patched the host using Update Manager. Once all hosts were patched with the latest updates, I manually installed the 1.60 driver (I prefer to do it this way). Once I installed the driver, I then proceeded to do the heap setting change. After each change I rebooted the host. (A rough console sketch of the per-host steps is below this list.)
  • The whole process took me till about 5:30am. This included failover testing as well.
  • Another issue I saw is that HA was acting a little funny for me. It was constantly disabling and enabling for hosts. The fix for me was to turn off HA on the cluster. Once it was off, I re-enabled it, and HA stabilized for the environment.
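For reference, the per-host rotation boils down to something like this from the service console once the VMs have been moved off (the driver install and heap setting change are the same steps described earlier in the thread):

#vmware-vim-cmd hostsvc/maintenance_mode_enter (put the host into maintenance mode)

...install the 1.60 driver bundle, set skb_heap_max, and reboot as above...

#vmware-vim-cmd hostsvc/maintenance_mode_exit (bring the host back once it is up)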

Everything appears stable, and I hope this helps you guys. I know it can be frustrating at times, but it always feels good when a problem is resolved (or at least I hope it is!).

Good Luck,

Reply
0 Kudos
Erwin_Zoer
Contributor

Hi Musicnieto,

Thanks for providing all the feedback. This has definitely been helpful for me, and I think for many others too.

Take care,

Erwin

Reply
0 Kudos
bebman
Enthusiast

I have seen the same issue with HA that musicnieto mentioned. I think it has to do with clusters of more than 5 hosts and the hosts that are primary HA nodes. HA configuration is supposed to promote hosts from secondary to primary when a primary goes into maintenance mode, but I also feel that this is affected by the number of host failures allowed in the cluster. The best way I have found to prevent HA issues before they start is to place the host in maintenance mode, finish vacating all VMs if the auto-migration doesn't get them all, and then remove the host from the cluster. Then I start the upgrade or patch for whatever is needed. If I don't see any HA issues initially after changes to the hosts or cluster, just for good measure, I will do what musicnieto did in removing HA from the cluster and then re-adding it. This allows for a 'clean' HA cluster configuration on the hosts.

@musicnieto - Thanks for all the updates on your environment and the driver. It has really been helpful to use as a reference. Have you heard whether there are any plans for a driver that works without changes to the heap?

NOTE: If your problem or question has been resolved, please mark this thread as answered and award points accordingly.
Reply
0 Kudos
Krede
Enthusiast

Does anyone know which bnx2x version is included in 4.1 U1?

Reply
0 Kudos
bebman
Enthusiast

According to this, VMware is still using 1.54.1.v41.1 out of the box, and bnx2x version 1.60.50.v41.2 is still available as a download.

http://www.vmware.com/resources/compatibility/detail.php?device_cat=io&device_id=4469&release_id=25

NOTE: If your problem or question has been resolved, please mark this thread as answered and award points accordingly.
Reply
0 Kudos
beovax
Enthusiast

Ignore me, I was looking at the wrong cluster. The driver version remains after the upgrade.

Reply
0 Kudos
vmmoz
Contributor

I am out of the office until 26.02.2010. I will reply to your message after my return.

In urgent cases, please contact our hotline at call-sued@acp.at (0316-4603-15999).

Reply
0 Kudos
sayahdo
Contributor

Thank you for your email; I'm out of the office sick with no access to email. If this is urgent, please contact Axon directly on 0800 80 60 90.

Regards

Mike

Reply
0 Kudos
Rabie
Contributor

Hi,

I noticed that VMware has released 2 new drivers for 4.0:

1.60.50.v40.3     2011/03/09

http://downloads.vmware.com/d/details/dt_esxi40_broadcom_bcm57xxx/ZHcqYnR0d3BiZCpldw==

1.62.11.v40.1     2011/03/03

http://downloads.vmware.com/d/details/dt_esxi40_broadcom_bcm57xxx_16211v401/ZHcqYnR0JWRiZCpldw==

But in typical fashion there are NO release notes with them, so I have no idea what they do or fix.

Anyone have any insight as to what they might contain?

Regards

Rabie

Reply
0 Kudos
Krede
Enthusiast

And this one is for 4.1: bnx2x-1.62.15.v41 - has anyone tried it yet?

Reply
0 Kudos