I'm trying to work with some 5709's on a Dell 710... did you come to a conclusion and move on with normal frames? Did you use the SW iSCSI initiator, or did you use the card's HW iSCSI offload?
The Broadcom 5709 in 4.1 does not support Jumbo Frames AT THIS TIME. When Broadcom wrote the driver for VMware to support HW offload, they didn't write in support for Jumbo Frames, due to an issue with the way vSphere handles Jumbo Frames with the HW initiator. This will eventually be fixed by Broadcom with an updated driver (no ETA yet on this, from what I've heard), but it will happen.
I recommend using the SW initiator w/ Jumbo Frames, as performance will be better than HW anyway. According to many tests done by others on blogs, the performance gain from HW offload is not enough AT THIS TIME to justify using it.
I have many R610 hosts with 5709's and I use SW. If a new driver is released in a newer vSphere release, I may switch at that time, but performance has been more than acceptable.
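For anyone setting this up, here's roughly what we did on 4.1 to get the SW initiator running with jumbo frames. This is just a sketch: the vSwitch name, portgroup name, and IPs are examples from our lab, and it assumes the iSCSI portgroup already exists on the vSwitch.

    # Set MTU 9000 on the iSCSI vSwitch (vSwitch1 is an example name)
    esxcfg-vswitch -m 9000 vSwitch1

    # The vmkernel port MTU can't be changed in place on 4.1, so delete
    # and recreate the iSCSI vmkernel port with MTU 9000
    esxcfg-vmknic -d iSCSI1
    esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 -m 9000 iSCSI1

    # Verify end-to-end: 8972 = 9000 minus 28 bytes of IP/ICMP headers
    vmkping -s 8972 10.0.1.50

Remember the physical switch ports and the SAN interfaces also need jumbo frames enabled, or the vmkping test will fail even with the host configured correctly.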
s1xth, thanks for the notes, exactly the information I was searching for. Nice to know that hardware based iSCSI jumbo frame support is in the pipeline... somewhere.
I will also be very interested to compare VM CPU loads between software and hardware iSCSI once jumbo frames are supported in the hardware driver. One of our guys here (kudos!) has been doing some extensive testing with interesting results (M710's, EQL PS6010 SAN). Those tests suggest a measurable difference in CPU load between software and hardware with an MTU of 1500 in both cases. For test VM -> SAN throughput testing (doing nothing else), we see double to triple the CPU load in software for the same tests (e.g. 300MHz in software compared with 150MHz in hardware in the best comparison example). Software-based iSCSI using jumbo frames is typically about 3x-4x the CPU load of hardware-based iSCSI without jumbo frames, in this same VM -> SAN throughput testing.
Our testing also shows about 2.5x-3x the overall SAN throughput when using jumbo frames compared to non-jumbo frames. Our question now becomes: "Do we want to trade CPU usage, and potentially ready time, for the jumbo frame throughput, or is our SAN throughput sufficient using hardware without jumbo frames?" (Need to find the references mentioned above for comparison.)
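If anyone wants to reproduce the comparison, esxtop in batch mode is one way to capture the numbers during a throughput run. The interval and sample count below are just what we'd suggest, not gospel:

    # Capture 60 samples at 5-second intervals to a CSV for offline analysis
    esxtop -b -d 5 -n 60 > iscsi-test.csv

    # Or in interactive esxtop: 'c' for the CPU view (watch %USED for the
    # test VM's world), 'n' for the network view during the run

The batch CSV is easiest to compare across the software and hardware test cases, since you get the same counters sampled the same way for both.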
We also played with some Intel 82599 10Gb NICs, but these do not seem to support hardware offload, and we are keen to offload the iSCSI to hardware.
Also... anyone out there playing with 10Gb NIC configs, please be aware of the maximum supported NIC configurations per host (see VMware's Configuration Maximums document for your vSphere release).
If you stray outside these guidelines, at least in our experience with Broadcom NICs, VBTCH (Very Bad Things Can Happen). Randomly. Not necessarily immediately.
So if you were thinking of generating plenty of capacity by loading up 4 x 10Gb NICs for iSCSI, 4 x 10Gb NICs for network, and 4 x 1Gb for management in a single host... then perhaps reconsider 2 x 10Gb for iSCSI and 2 x 10Gb for network instead, based on the above guidelines. (A quick way to inventory a host is below.)
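For sanity-checking what a host actually has installed before comparing against the maximums, the NIC listing commands are handy (esxcfg-nics on 4.x, esxcli on 5.x):

    # ESX(i) 4.x: list physical NICs with driver, link speed, and MTU
    esxcfg-nics -l

    # ESXi 5.x equivalent
    esxcli network nic list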
Anybody know if the BCM5709 with jumbo frames works now with ESXi 5?
Q: Anybody know if the BCM5709 with jumbo frames works now with ESXi 5?
Nope -- it doesn't work with ESXi 5.0 either. It is not a supported configuration in 5.0.
How would one know this?!? I see no mention of this in the vSphere 5 compatibility matrices. I only learned about it from the EqualLogic MEM release notes.
Interesting, not sure why this is missing from the compatibility matrix. Btw -- which one did you refer to? Can you point me to the compatibility matrix chart you looked at, and the release notes from EqualLogic?
Good point on the maximum NIC configuration. We just ordered a stack of Intel X520-DA2 10Gb cards. We will be sure not to install all of them in the same box.
Surely a proper driver could have been written by now (two years later)? Jumbo frames AND HBA offload, not just one OR the other.
Good question!! I will find out!!
Did you find out if TOE and jumbo frames are now possible?
Not as far as my tests (and reading around the forums) can confirm. Very sad!
With the latest driver set, "VMware ESXi 5.0 Driver CD for Broadcom NetXtreme II Network/iSCSI/FCoE Driver Set" (https://my.vmware.com/group/vmware/details?downloadGroup=DT-ESXi50-Broadcom-bnx2x-17254v502&productId=229), it's working for me. Have a look at the release notes:
Version 2.72.10 (Mar. 13, 2012)
1. Change: Default MAX MTU support to 9000 for iSCSI offload for ESX 5.0. (Approved by VMware)
So everybody should give it a try.
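If you do try it, it's the usual offline-bundle install. The datastore path and bundle filename below are just examples -- use whatever the download actually gives you:

    # Install the offline bundle from the driver CD, then reboot the host
    esxcli software vib install -d /vmfs/volumes/datastore1/bnx2x-offline_bundle.zip

    # After the reboot, confirm the new driver version took
    esxcli software vib list | grep bnx

    # Check that the dependent hardware iSCSI adapters show up
    esxcli iscsi adapter list

    # And verify jumbo frames end-to-end from the iSCSI vmkernel port
    # (-d = don't fragment; 8972 = 9000 minus 28 bytes of headers)
    vmkping -d -s 8972 10.0.1.50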
As stated in 2 other (independent) threads about this issue, YES, this driver fixes it, end of story.
Pity it took so long