VMware Cloud Community
AndriusKr
Contributor

Broadcom BCM5709 with Jumbo Frames as iSCSI adapter

Hello,

I have searched and browsed a lot on this topic, but there is one thing I do not understand about the Broadcom BCM5709 with jumbo frames as an iSCSI adapter: is it unsupported because of a hardware issue, e.g. Broadcom "forgot" to lay out the required "larger" circuits (or enough embedded controller processing power) for iSCSI support with jumbo frames, or is it because of the software side: the driver, the ESXi host, etc.?

What I mean is: is support out of the question because the hardware is not capable of it and never will be (even with a LOM firmware update, for example), or is it the software support that is lacking, which may be solved in time as the ESXi driver and the iSCSI firmware on the LOM evolve?

I was stuck trying to enable hardware-dependent iSCSI (and still am, despite all the walk-throughs) when I found out about this issue, which is a design consideration I did not take into account when purchasing the server hardware :(

16 Replies
nichobemo
Contributor

I'm trying to work with some 5709s on a Dell 710... did you come to a conclusion and move on with normal frames? Did you use the SW iSCSI initiator or the card's HW iSCSI offload?

s1xth
VMware Employee

The Broadcom 5709 in 4.1 does not support jumbo frames AT THIS TIME. When Broadcom wrote the driver for VMware to support HW offload, they didn't write in support for jumbo frames due to an issue with the way vSphere handles jumbo frames with the HW initiator. This will eventually be fixed by Broadcom with an updated driver (no ETA yet on this from what I've heard), but it will happen.

I recommend using the SW initiator w/ jumbo frames, as performance will be better than HW anyway... according to many tests done by others on blogs, the performance gain with HW offload is not enough AT THIS TIME to justify using it.

I have many R610 hosts with 5709s and I use SW. If a new driver is released in a newer vSphere release, I may switch at that time, but performance has been more than acceptable.
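For anyone going the SW initiator route, enabling jumbo frames means raising the MTU on both the vSwitch and the iSCSI vmkernel port. A minimal sketch using ESXi 5.x esxcli syntax; vSwitch1 and vmk1 are placeholder names, substitute your own (on 4.x the equivalent commands are esxcfg-vswitch and esxcfg-vmknic):

```shell
# Raise the MTU on the vSwitch carrying iSCSI traffic
# (vSwitch1 is a placeholder; substitute your iSCSI vSwitch)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Raise the MTU on the iSCSI vmkernel interface (vmk1 is a placeholder)
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify the new MTU values
esxcli network ip interface list
```

Remember the physical switch ports and the SAN interfaces must also allow jumbo frames, or you get fragmentation/drops end to end.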

Jonathan

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
mckenzieaj
Contributor

s1xth, thanks for the notes, exactly the information I was searching for.  Nice to know that hardware-based iSCSI jumbo frame support is in the pipeline... somewhere.

I will also be very interested to compare VM CPU loads between software and hardware iSCSI once jumbo frames are supported in the hardware driver.  One of our guys here (kudos!) has been doing some extensive testing with interesting results (M710s, EQL PS6010 SAN).  Those tests suggest a measurable difference in CPU load between software and hardware at an MTU of 1500 in both cases.  For pure test VM -> SAN throughput testing (doing nothing else), we see double and triple the CPU load (e.g. 300 MHz in software compared with 150 MHz in hardware in the best comparison example). Software-based iSCSI using jumbo frames is typically about 3x-4x the CPU load vs hardware-based iSCSI without jumbo frames in this same VM -> SAN throughput testing.

Our testing also shows about 2.5x-3x the overall SAN throughput when using jumbo frames compared to non-jumbo frames.  Our question now becomes: "Do we want to trade CPU usage, and potentially ready time, for the jumbo frame throughput, or is our SAN throughput sufficient using hardware without jumbo frames?"  (Need to find the references mentioned above for comparison.)

We also played with some Intel 82599 10Gb NICs, but these do not seem to support hardware offload, and we are keen to offload the iSCSI to hardware.

Also... anyone out there playing with 10Gb NIC configs, please also be aware of the maximum NIC configuration:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=102080...

If you stray outside these guidelines, at least in our experience with Broadcom NICs, VBTCH (Very Bad Things Can Happen).  Randomly.  Not necessarily immediately.

So if you were thinking of generating plenty of capacity by loading up 4 x 10Gb NICs for iSCSI, 4 x 10Gb NICs for network, and 4 x 1Gb for management in a single host... then perhaps reconsider 2 x 10Gb for iSCSI and 2 x 10Gb for network instead, based on the above guidelines.

ABusch
Enthusiast

Anybody know if the BCM5709 with jumbo frames works now with ESXi 5?

admin
Immortal

Q: Anybody know if the BCM5709 with jumbo frames works now with ESXi 5?

Nope -- it doesn't work with ESXi 5.0 either. It is not a supported configuration in 5.0.

Message was edited by: Chitti [ Added the question in my post]

cdickerson75
Enthusiast

How would one know this?!?  I see no mention of it in the vSphere 5 compatibility matrixes.  I only learned about it from the EqualLogic MEM Release Notes.

admin
Immortal

Interesting, not sure why this is missing from the compatibility matrix.  BTW, which one did you refer to? Can you point me to the compatibility matrix chart you looked at, and the release notes from EqualLogic?

SillyBit
Contributor

Good point on the maximum NIC configuration.  We just ordered a stack of Intel X520-DA2 10Gb cards.  We will be sure not to install all of them in the same box.

scerazy
Enthusiast

Surely a proper driver could have been written by now (2 years later)? JF AND HBA, not just one OR the other.

Seb

s1xth
VMware Employee

Good question!! I will find out!!

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
ABusch
Enthusiast

Did you find out if TOE and Jumbo-Frames are now possible?

scerazy
Enthusiast

Not as far as my tests (and reading around the forum) confirm. Very sad!

Seb

ABusch
Enthusiast

With the latest driver package, "VMware ESXi 5.0 Driver CD for Broadcom NetXtreme II Network/iSCSI/FCoE Driver Set", link: https://my.vmware.com/group/vmware/details?downloadGroup=DT-ESXi50-Broadcom-bnx2x-17254v502&productI...

it's working for me. If you have a look at the release notes:

Version 2.72.10 (Mar. 13, 2012)
==========================
Enhancements
------------
1. Change: Default MAX MTU support to 9000 for iSCSI offload for ESX 5.0. (Approved by VMware)

So everybody should give it a try.
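For anyone trying this, one way to confirm which Broadcom driver VIBs are installed and what MTU the offload HBA reported is a sketch like the following (the grep patterns and the vmkernel log path are assumptions based on an ESXi 5.0 host; adjust as needed):

```shell
# List installed Broadcom NetXtreme II driver VIBs
# (name pattern assumed; adjust if your VIBs differ)
esxcli software vib list | grep -i bnx2

# Check what MTU the bnx2i iSCSI HBA reported at driver load time
# (log path assumed for ESXi 5.0)
grep -i 'bnx2i.*mtu' /var/log/vmkernel.log
```

If the driver update worked, the second command should show the HBA supporting an MTU of 9000 rather than 1500.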

Best

Alex

scerazy
Enthusiast

As stated in 2 other (independent) threads about this issue, YES, this driver fixes it, end of story.

Pity it took so long

Seb

lexalex
Contributor

Nope :(

ESX 4.1U3 with the latest "VMware ESX/ESXi 4.1 Driver CD for Broadcom NetXtreme II Network/iSCSI Driver Set" (for BCM5709C) (1.74.22.v41.1 from 2013-01-24)

-->

Feb 16 12:56:47 ESX-SRV1 vmkernel: 0:10:09:01.112 cpu2:4247)<1>bnx2i::0x41000a802538: vmnic1 network i/f mtu is set to 9000
Feb 16 12:56:47 ESX-SRV1 vmkernel: 0:10:09:01.112 cpu2:4247)<1>bnx2i::0x41000a802538: iSCSI HBA can support mtu of 1500
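The second log line is the tell-tale: the network interface is at 9000 but the iSCSI HBA still caps at 1500. A quick way to check this on your own host (log path assumed for classic ESX 4.1; on ESXi 5.x it would be /var/log/vmkernel.log):

```shell
# Show what MTU the bnx2i offload HBA claims to support
# (ESX 4.1 classic log path assumed)
grep -i 'bnx2i.*mtu' /var/log/vmkernel
```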

scerazy
Enthusiast

But only for you, for anybody else it is YES
