Eudmin
Enthusiast

Jumbo frames to EqualLogic

I have HP BL460c blade servers running ESXi 4.1 U1 with NC325m quad-port NICs as add-on cards.  Two ports on each card are connected to my storage network, which consists of two Dell PowerConnect 5448 switches paired with an 8-port LAG.  These switches connect to my two EqualLogic PS5000E arrays.

My understanding is that the EqualLogic supports jumbo frames out of the box, so I haven't tried to configure anything on that end.  I called Dell and made sure that I had configured jumbo frame support on the switches correctly, and it seems that I have.  I then used the EqualLogic MEM installer from within the vMA to set up and configure my vSwitch for storage traffic, with the whole thing set to an MTU of 9000.  Then I restarted everything (except the EqualLogic).  The config still looked right, but I never connect to the EqualLogic with anything larger than standard-sized frames.  Also, vmkping -d -s 9000 to the EqualLogic doesn't work.
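
For reference, the manual equivalent of what the MEM installer set up should be roughly the following; the vSwitch name, portgroup name, and IP are placeholders, so treat this as a sketch.  On 4.1 a vmknic's MTU can't be changed in place, so it has to be deleted and recreated:

esxcfg-vswitch -m 9000 vSwitch2                                 # raise the vSwitch MTU
esxcfg-vmknic -d iSCSI1                                         # remove the old vmkernel port
esxcfg-vmknic -a -i 192.168.235.x -n 255.255.255.0 -m 9000 iSCSI1   # recreate it with MTU 9000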

I've read on the HP site that this network card, which uses a Broadcom 5715S chip, can't use Large Send Offload (LSO) and jumbo frames at the same time.  Perhaps that's what is happening: I didn't enable LSO, but maybe it's on by default.  I haven't found a definite way within ESX to disable that feature, if that's what's preventing me.
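
The closest thing I've turned up is the VMkernel's hardware TSO advanced setting.  I'm not sure it maps to the LSO feature on this card, so treat this as a guess:

esxcfg-advcfg -g /Net/UseHwTSO    # check the current value (1 = enabled)
esxcfg-advcfg -s 0 /Net/UseHwTSO  # disable it; I believe a reboot is needed afterwards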

Any ideas on how to get jumbo frames working?  It's the last piece of my puzzle for getting the whole thing running to the EqualLogic recommended optimal settings.

Also, I've enabled jumbo frames on the vMotion vSwitches and vmkernel ports.  How would I test that it's working for vMotion?  Can I force vmkping to use a specific vmnic when it sends a ping to a vMotion IP address?
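
My best guess at a test, as a sketch: as far as I can tell, vmkping on 4.1 can't be pinned to a specific vmnic; it picks the vmkernel interface whose subnet matches the destination.  So an oversized don't-fragment ping to the other host's vMotion address should exercise the vMotion vmk:

vmkping -d -s 8972 <other host's vMotion IP>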

howit
Contributor

Just to make sure: you have changed both the vSwitch and vmknic MTU to 9000, and also bound the vmknic to the software initiator, right?

If you are using a hardware HBA, I don't think jumbo frames are supported yet.  If you are using the software initiator, let's check a couple of things:

- Log in to the console / SSH.

Run this command:

esxcfg-vswitch -l

The output should show that your vSwitch has jumbo frames enabled (MTU 9000).

Then list your vmknics; the output should also say 9000 in the MTU column:

esxcfg-vmknic -l
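
Roughly what you want to see in both; names and addresses here are made up, just a sketch:

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch1         128         6           128               9000    vmnic2,vmnic3

vmk1   iSCSI1   IPv4   172.16.187.100   255.255.255.0   172.16.187.255   xx:xx:xx:xx:xx:xx   9000   65535   true   STATIC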


I'm not sure if we're allowed to post links in here, but this guide is pretty good for setting up jumbo frames and multipathing; it works fine for me:

http://www.definit.co.uk/2010/11/12/configuring-software-iscsi-multipathing-on-esxi-4-1/


Wish I could help somehow.

AndreTheGiant
Immortal

The EqualLogic is already configured for jumbo frames.

But the switches (especially the Dell PowerConnects) are not, and they usually require a COLD reboot to enable it.

On the host side, both the vSwitch AND the vmkernel interfaces must be set to MTU 9000.
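
If it helps, on the PowerConnect 54xx series the jumbo setting is global and only takes effect after a restart.  From memory, so double-check against the 5448 CLI guide:

console# configure
console(config)# port jumbo-frame
console(config)# exit
console# copy running-config startup-config
console# reload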

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro

howit
Contributor

Yes, you should definitely check the physical switch.  I just remembered that I forgot to set mine when I was doing this.

Eudmin
Enthusiast

Thanks for the link.  I'd read it; I feel like I've read everything on jumbo frames and ESX in the last few days.  My vSwitches, vmknics, and switches all say that jumbo frames are enabled.  As I said, I used the EqualLogic MEM installer and specified an MTU of 9000, and it seemed to set everything up correctly.

Strangely, I've got a Windows 2008 R2 virtual machine running with a vmxnet3 interface on the storage network.  It has to be using the very same NICs to connect, and it logs in to the EqualLogic with jumbo frames just fine, so that assures me that the switches are working.  I would think the vmks are working too, but I don't know whether iSCSI traffic from VMs goes through those vmk interfaces at all.

FYI, here's the INFO log from the array showing the VM connecting to the EqualLogic which shows that the VM connects using Jumbo Frames:

Level: INFO
Time:  5/9/11 9:19:08 AM
Member:  flute
Subsystem:  MgmtExec
Event ID:  7.2.14
iSCSI login to target '192.168.235.104:3260, iqn.2001-05.com.equallogic:0-8a0906-787dbc502-ff9000ee7fc4c1a6-sqldata' from initiator '192.168.235.14:58448, iqn.1991-05.com.microsoft:vcenter.mydomain.com' successful, using Jumbo Frame length.

and here's the INFO log from the ESXi host that this VM is running on, logging in earlier:

Level: INFO
Time:  5/7/11 8:48:18 PM
Member:  flute
Subsystem:  MgmtExec
Event ID:  7.2.47
iSCSI login to target '192.168.235.101:3260, iqn.2001-05.com.equallogic:0-8a0906-4768c0a03-4d67da799124dc2d-vmwareservers' from initiator '192.168.235.220:56211, iqn.1998-01.com.vmware:blade2-095bfd6c' successful using standard-sized frames.
NOTE: More than one initiator is now logged in to the target.

Level: INFO
Time:  5/7/11 8:48:18 PM
Member:  flute
Subsystem:  MgmtExec
Event ID:  7.2.47
iSCSI login to target '192.168.235.101:3260, iqn.2001-05.com.equallogic:0-8a0906-4768c0a03-4d67da799124dc2d-ncnrservers' from initiator '192.168.235.220:56211, iqn.1998-01.com.vmware:blade2-095bfd6c' successful using standard-sized frames.
NOTE: More than one initiator is now logged in to the target.

Eudmin
Enthusiast

Oh, and here's the output of my esxcfg-nics -l command, which shows the jumbo MTU config:

Name    PCI           Driver      Link Speed     Duplex MAC Address       MTU    Description

vmnic4  0000:19:04.00 tg3         Up   1000Mbps  Full   78:e7:d1:5a:fd:7c 9000   Broadcom Corporation NC325m PCIe Quad Port Adapter
vmnic5  0000:19:04.01 tg3         Up   1000Mbps  Full   78:e7:d1:5a:fd:7d 9000   Broadcom Corporation NC325m PCIe Quad Port Adapter
vmnic6  0000:1b:04.00 tg3         Up   1000Mbps  Full   78:e7:d1:5a:fd:7e 9000   Broadcom Corporation NC325m PCIe Quad Port Adapter
vmnic7  0000:1b:04.01 tg3         Up   1000Mbps  Full   78:e7:d1:5a:fd:7f 9000   Broadcom Corporation NC325m PCIe Quad Port Adapter

howit
Contributor

I'd like to add that for guest VMs, jumbo frames only work on vmxnet3.

Have you checked your Dell PowerConnect 5448s to see if they have jumbo frames enabled?

Eudmin
Enthusiast

Yes, the switches have it enabled.  As I said, a VM using those switches is actually going end-to-end with jumbo frames, and as I posted, even the EqualLogic sees that it connected with jumbo frames.

howit
Contributor

When you use the software initiator (you're using VMware's software initiator, right?) to connect to the storage, what do the EqualLogic logs say?

Eudmin
Enthusiast

Look at the end of my post just above: I pasted in the message from the EqualLogic when the VMware iSCSI initiator connects.  Standard-sized frames.

howit
Contributor

Sorry, my mistake.  Give me a couple of minutes and I'll start my own jumbo frame tests; maybe we'll end up in the same boat, hehe.

Eudmin
Enthusiast

It seems like the test is to do vmkping -d -s 8000 192.168.235.101, or whatever IP address you've got the EqualLogic on.  -d sets the don't-fragment bit and -s 8000 sets the payload size.  There's some confusion about jumbo frames being exactly 9000 bytes because of the overhead involved; I think technically anything over 1500 is jumbo, so just in case there was an overhead problem I tried it with 8000-byte frames, and it still fails.
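
(The overhead math, if I have it right: a 9000-byte MTU leaves room for 8972 bytes of ICMP payload, because 8972 + 8 bytes of ICMP header + 20 bytes of IP header = 9000.  So vmkping -d -s 8972 should be the largest ping that succeeds, and -s 8000 is comfortably inside that.)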

~ # vmkping -d -s 8000 192.168.235.101
PING 192.168.235.101 (192.168.235.101): 8000 data bytes
sendto() failed (Message too long)
sendto() failed (Message too long)
sendto() failed (Message too long)

I logged a Sev 4 tech support case with VMware since I have production support, and they have until 6 PM to respond.  I'll update this post if we get it worked out.

howit
Contributor

Try pinging your Windows VM with jumbo frames...

Eudmin
Enthusiast

Good idea.  Didn't work.  Message too long.

howit
Contributor

My own tests:

~ # vmkping -d -s 8000 172.16.187.1
PING 172.16.187.1 (172.16.187.1): 8000 data bytes
8008 bytes from 172.16.187.1: icmp_seq=0 ttl=64 time=0.214 ms
8008 bytes from 172.16.187.1: icmp_seq=1 ttl=64 time=0.445 ms
8008 bytes from 172.16.187.1: icmp_seq=2 ttl=64 time=0.168 ms

~ # ping 172.16.187.1 -s 8000
PING 172.16.187.1 (172.16.187.1): 8000 data bytes
8008 bytes from 172.16.187.1: icmp_seq=0 ttl=64 time=0.226 ms
8008 bytes from 172.16.187.1: icmp_seq=1 ttl=64 time=0.184 ms
8008 bytes from 172.16.187.1: icmp_seq=2 ttl=64 time=0.202 ms

~ # esxcfg-vmknic -l

Interface  Port Group/DVPort   IP Family IP Address                              Netmask         Broadcast       MAC Address       MTU     TSO MSS   Enabled Type
vmk2       iSCSI1              IPv4      172.16.187.100                          255.255.255.0   172.16.187.255  00:60:56:72:e8:f5 9000    65535     true    STATIC
vmk3       iSCSI2              IPv4      172.16.187.101                          255.255.255.0   172.16.187.255  00:60:57:7b:bf:1c 9000    65535     true    STATIC

~ # esxcfg-vswitch -l

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch1         128         7           128               9000    vmnic2,vmnic3

  PortGroup Name        VLAN ID  Used Ports  Uplinks
  ISCSI 1 SERVER        0        1           vmnic2
  ISCSI 2 SERVER        0        1           vmnic3
  iSCSI2                0        1           vmnic3
  iSCSI1                0        1           vmnic2

physical switch - check

vswitch - check

vmnic - check

vmknic - check

I don't see a reason why yours isn't working...

chimera
Contributor

Hi,

Interesting post. I'm going through issues with a 10GbE PS6010XV iSCSI SAN at the moment, where read latency is intermittently quite high. I'm putting it down to either the Broadcom 57711 driver in VMware (ESX 4.1 U1 as well), possibly firmware, or something to do with jumbo frames not working properly (hence how I came across this post). Anyway...

Question: what MTU size have you set on each access port on the switch(es)? According to Dell it should be 9216.  My understanding is that the MTU at each end device (ESX and EqualLogic) should be 9000, and the switch given a higher MTU to allow for overhead. Why, I'm not sure, as I thought MTU was the maximum transmission unit *including* protocol overheads, as has been outlined above.

I get a reply when doing a vmkping of up to a maximum of 8972 bytes; it fails with any size larger.
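
(That 8972 ceiling is consistent with a 9000 MTU: 8972 payload + 8 ICMP header + 20 IP header = 9000. My guess on the 9216 is that the switch counts the full Ethernet frame, so the extra bytes cover the Ethernet header, VLAN tag, and FCS on top of the 9000-byte IP packet, but I'm speculating on Dell's reasoning.)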

Perhaps check your MTU setting on the switch for each port?

Cheers

tdubb123
Expert

I am getting the same problem here, except I am on Dell hardware. Jumbo has been enabled on the dvSwitch and vmknics.

I am using both Broadcom NetXtreme II BCM5708 and Intel 82575B NICs.

Have you found a solution to this? The MTU has been set to 9000 on the Cisco side, and the EqualLogic is already at 9000.

Any vmkping larger than 1500 is not working.

glennoz
Contributor

I had the same issue; a reboot of the switches fixed it.
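
If you're on Catalysts (a guess on my part, since tdubb123 mentioned Cisco), on many Catalyst models the jumbo MTU is a system-wide setting and only takes effect after a reload, something like:

switch# show system mtu
switch# configure terminal
switch(config)# system mtu jumbo 9000
switch(config)# end
switch# reload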
