VMware Cloud Community
HarisB
Contributor

ESX 3.5i, Windows Storage Server, iSCSI jumbo frames settings, is this ok?

Hey all,

I've followed this document in order to configure jumbo frames on my ESX 3.5i host and iSCSI target:

I have this output from the ESX 3.5i host:

What confuses me is the 1500 MTU on vmk2 (circled), yet I get replies when I ping my WSS iSCSI target with 9000-byte packets (and replies the other way around, pinging ESX from WSS with ping -l 9000 172.16.2.11).

I have tried removing vSwitch2 completely and recreating it using the instructions in the doc above (there are some useful comments there too), and got to the point where everything in the screenshot shows 9000 MTU, but then I can't ping WSS at all, or the other way around.

WSS and ESX are connected with a crossover cable, so there is no switch config to blame. WSS has one interface for the management network and one for iSCSI, so there is no other traffic involved.

I need to confirm whether my transfers are actually using jumbo frames or not.

Any ideas welcome.

Thanks

Paul_Lalonde
Commander

Out of curiosity, what happens when you ping with the -f option (don't fragment)? Do you still get replies in both directions?
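
Something like this from the WSS side should tell you for sure (just a sketch; 172.16.2.11 is the vmkernel address from your post, and 8972 leaves room for the 28 bytes of IP and ICMP headers within the 9000-byte MTU):

ping -f -l 8972 172.16.2.11

If that gets replies but -l 8973 comes back with "Packet needs to be fragmented but DF set", then the 9000 MTU path is working end to end.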

I've been able to get jumbo frames working fine with ESX and iSCSI storage. Have you configured the GigE NICs on your WSS with jumbo frames as well?

Paul

HarisB
Contributor

There we go, I get:

Pinging 172.16.2.11 with 9000 bytes of data:

Packet needs to be fragmented but DF set.

So it's not working.

I've updated the Intel PRO/1000 MT drivers on WSS and enabled jumbo frames, but it's still not going through. In the meantime I've toasted the previous ESXi installation and have a fresh install, and will try to get those settings configured again per the doc I posted above.

Based on your experience, is there anything I should be on the lookout for, any tips or tricks?

Thanks

scott_lowe1
Contributor

When creating the VMkernel NIC using esxcfg-vmknic, you'll need to manually specify the maximum MTU size with the "-m 9000" parameter. Setting the vSwitch to an MTU of 9000 is not enough. Without explicitly specifying the MTU, esxcfg-vmknic will create a VMkernel NIC with an MTU size of 1500.
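
Roughly, the sequence looks like this (the portgroup name below is just an example, and you would substitute your own IP and netmask; as far as I know the existing VMkernel NIC has to be deleted and recreated, since its MTU can't be changed in place):

esxcfg-vswitch -m 9000 vSwitch2
esxcfg-vmknic -d "iSCSI VMkernel"
esxcfg-vmknic -a -i 172.16.2.11 -n 255.255.255.0 -m 9000 "iSCSI VMkernel"
esxcfg-vmknic -l

The last command lists the VMkernel NICs so you can confirm the MTU column now shows 9000.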

It appears that vmkping can't force packets not to be fragmented, so I'd create a second Service Console interface on that same vSwitch and use regular ping to test the MTU, setting the don't fragment bit in the ping options.
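
From that Service Console interface the test would be something along these lines (a sketch; replace the placeholder with your WSS iSCSI target address, and 8972 again allows for the IP and ICMP headers):

ping -M do -s 8972 <WSS iSCSI IP>

The -M do option is the Linux ping way of prohibiting fragmentation, so an 8972-byte payload only gets a reply if the whole 9000 MTU path is clean.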

Hope this helps!

chiznitz
Contributor

I'm having a similar issue. Have you tried your ping as vmkping -s 8991 <ip>? When I do this it works fine.

I followed the same blog to the letter and everything looks to be configured correctly. However, when you do a transfer the speed is horrible while jumbos are enabled. When I disable jumbo frames the speed goes up and becomes consistent. With jumbo frames we're seeing very spiky behavior, between 10% and 60% network usage on a 100 Mbit client PC.

MD3000i set to 9000

2 vSwitches set to 9000

2 NICs set to 9000

Cisco 2960 set with "system mtu jumbo 9000"
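
In case it helps to compare notes, a rough way to double-check the MTU at each hop is something like this (adjust the names for your own setup):

esxcfg-vswitch -l (check the MTU column for the vSwitches)
esxcfg-vmknic -l (check the MTU column for the VMkernel NICs)
show system mtu (on the 2960; the jumbo MTU only takes effect after a reload)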

Anyone else able to shed some light on this?
