It seems that with my setup (HP 1800-24G switches) I'll have to choose between either jumbo frames or flow control for iSCSI SAN access. I'm currently running jumbo frames with an MTU of 9000 bytes. Enabling flow control in addition to jumbo frames yields near-zero transfer rates, so I guess the combination is unsupported by the ProCurve 1800-24G.
Which option would you recommend for better performance in a setup with two ESX servers (HP DL380 G5, ESX 3.5 U2) and one SAN (HP MSA 2012i, dual controller)?
Virtualized applications include databases (SAP, Exchange) as well as a moderately used file server (user shares, business documents).
I've read other threads on this subject but didn't come to a clear conclusion.
Best Regards, Felix Buenemann
The 1800-24 has a 500 KB packet buffer. The specs don't say whether that is per port or per chassis, but since the 1800-8 has 144 KB of buffer space I'd guess it's per chassis. That's not great news for iSCSI.
If you don't have enough buffer space per port, you can run into dropped frames during periods of high throughput. A dropped frame means TCP has to retransmit it, and that is slow relative to normal operation. I would choose flow control over jumbos: flow control tells the end device to stop sending frames until the switch has had time to process the ones already queued.
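To put that buffer size in perspective, here's a rough back-of-the-envelope sketch (my own assumptions, not from the datasheet: two gigabit senders bursting simultaneously at a single gigabit egress port, with the full 500 KB buffer available to that port):

```python
# Rough estimate of how long a shared switch buffer survives a
# 2-senders-into-1-port burst before frames start getting dropped.
# Assumptions (not from the switch specs): both senders at full
# 1 Gbit/s line rate, one 1 Gbit/s egress port, 500 KB buffer.

LINE_RATE_BPS = 1_000_000_000          # 1 Gbit/s per port
BUFFER_BYTES = 500 * 1024              # 500 KB shared packet buffer

ingress_bps = 2 * LINE_RATE_BPS        # two senders bursting at once
egress_bps = LINE_RATE_BPS             # single egress port draining
fill_rate = (ingress_bps - egress_bps) / 8   # net fill rate, bytes/s

time_to_overflow = BUFFER_BYTES / fill_rate  # seconds until drops begin
print(f"{time_to_overflow * 1000:.1f} ms")   # prints "4.1 ms"
```

So a sustained many-to-one burst fills the buffer in a few milliseconds, which is exactly the situation where flow control (pausing the senders) beats relying on TCP retransmits.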
Ben
I've been told that the ProCurve 1800s are not suited for iSCSI because the amount of internal buffer space is too low. We did a project with an MSA2012i and two ProCurve 2824 switches some time ago. Using Iometer and 32 KB sequential transfers, I could easily pull 100 MByte/s and push about 50-70 MByte/s without any modifications to the network stack.
Jumbo frames and flow control at the same time isn't supported, as far as I know; at least it wasn't the last time I did an implementation with these switches. We used flow control because jumbo frames weren't supported by VMware anyway.
Duncan
If you have to choose between the two, you want flow control. Even on a switch that supports both flow control and jumbo frames simultaneously, VMware doesn't officially support this configuration (last I checked, anyway). I can tell you, however, that it does work.
-Josh.
Thanks for your suggestions and explanations. I'll try switching to flow control without jumbo frames the next time I get a chance to shut down all VMs.
Just wanted to report: I switched over from jumbo frames to flow control but didn't see a difference in performance, neither improved nor decreased.
Hm, this thread is very interesting.
I've got the HP ProCurve 1800-24G (model # J9028B). Since it is a dumb switch, how can we choose between jumbo frames and flow control?
Kind Regards,
AWT
Yes, you're right, Mr. Epping: ESXi 4.0 doesn't support jumbo frames.
Even after I tried to enable it on my Dell MD3000i SAN and on the ESXi 4 build 164009 host, with a direct Ethernet cable connection between them (no switch), the vmkping command from the SSH console got no response 😐
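As a side note (my own reasoning, not from the post above): when testing jumbo frames with vmkping you can't ping with a payload of 9000 bytes, because the 9000-byte MTU also has to carry the IP and ICMP headers. The usual test payload works out like this:

```python
# Largest ICMP payload that fits in a single unfragmented jumbo frame.
MTU = 9000            # jumbo-frame MTU on the vmkernel interface
IP_HEADER = 20        # IPv4 header without options
ICMP_HEADER = 8       # ICMP echo request header

payload = MTU - IP_HEADER - ICMP_HEADER
print(payload)        # prints 8972
```

So the end-to-end check would be something like `vmkping -s 8972 <san-ip>`; if that gets no reply while a default-size vmkping works, some hop in the path isn't passing jumbo frames.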
Kind Regards,
AWT
Jumbo frames are now supported as of ESX 4.0.
See iSCSI config guide:
"Jumbo Frames allow ESX/ESXi to send larger frames out onto the physical network. The network must support Jumbo Frames end-to-end for Jumbo Frames to be effective. Jumbo Frames up to 9kB (9000 Bytes) are supported."
The new "HP ProCurve Switch 1810G-24, 24-Port (J9450A)" (the successor to the HP 1800G-24) supports jumbo frames and flow control at the same time. I have it running here in my lab and it works like a charm.
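For anyone setting this up on ESX 4.0: the switch-side MTU is only half the story; the vSwitch and the iSCSI vmkernel port need MTU 9000 too. A sketch of the CLI steps, with hypothetical names and addresses (vSwitch1, the "iSCSI" port group, and the IPs are placeholders; adjust them to your environment):

```shell
# Raise the MTU on the vSwitch carrying the iSCSI vmkernel port:
esxcfg-vswitch -m 9000 vSwitch1

# ESX 4.0 can't change the MTU of an existing vmkernel NIC,
# so the port has to be removed and recreated with -m 9000:
esxcfg-vmknic -l                       # list existing vmkernel NICs
esxcfg-vmknic -d "iSCSI"               # delete the old vmknic
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 "iSCSI"

# Verify end-to-end with an unfragmented jumbo-sized ping to the SAN:
vmkping -s 8972 192.168.1.20
```

If the large vmkping fails while a default-size one succeeds, check the MTU on every hop: vSwitch, vmknic, switch ports, and the SAN controller interfaces.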