VMware Cloud Community
snowdog_2112
Enthusiast

iSCSI with jumbo frames issue

I have a small lab with a mix-n-match of physical hosts, a Netgear GS108T switch, and a Netgear ReadyNAS 2100 in a storage network (i.e., isolated from LAN vSwitch).

All NICs, the switch, and the NAS support jumbo frames.  The ReadyNAS is configured with both NICs in a LAG, as are the two corresponding ports on the switch.

I set all MTUs to 9000 (vNICs, vSwitch, pSwitch, NAS).

I am getting horrendous performance and have issues even seeing the LUNs on the NAS.

Is there something else I need to configure?

ramkrishna1
Enthusiast

Hi

Welcome to the communities.

To get better performance from the NAS, please test VMware Virtual SAN.

It may help you.

"I take a decision and make it right."
snowdog_2112
Enthusiast

Thanks, but any thoughts regarding the jumbo frame issue?

admin
Immortal

Hi,

Do you see any packet drops when you ping with a large packet size, like anything above 8000 bytes?

Thanks,
Avinash

tomtom901
Commander

You could give vmkping a try:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100372...

And see if Jumbo frames are supported throughout the entire path.
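
For example, a quick check from the ESXi shell (a minimal sketch; 192.168.1.100 here is just a placeholder for your NAS's iSCSI address):

  # vmkping -d -s 1472 192.168.1.100
  # vmkping -d -s 8784 192.168.1.100

The first command (standard frame size) should always succeed; if the second one (a jumbo-sized payload with fragmentation disabled) fails, some device in the path is not passing jumbo frames.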

Josh26
Virtuoso

To get better performance from the NAS, please test VMware Virtual SAN.

This software has a lot of uses - but how is improving performance of an existing SAN one of them?

To the original question: honestly, this is a fairly consumer-grade piece of equipment, and you just need to accept that it can't handle jumbo frames (regardless of what Netgear may say).

zXi_Gamer
Virtuoso

Well, iSCSI performance is a son of a gun when not working properly, but when configured correctly it provides optimal performance. Now, to the issue at hand, calling our usual suspects:

1. Can you confirm whether you are able to do a vmkping with a packet size of 8500 [without fragmentation]?

2. Now, hoping that everything is "configured properly" on the switch side for Jumbo Frames, do take a look here for the ACK issue:

VMware KB: ESX/ESXi hosts might experience read or write performance issues with certain storage arr...

3.

and issues even seeing the LUN's on the NAS.

This could be interesting. Are you sure that authentication is set up properly and that ESXi does not have to re-establish the session every time? You can confirm this by monitoring the vmkernel logs for any suspicious messages when scanning for LUNs (see the log-monitoring sketch just after this list).

4. Again, for the ReadyNAS there are some workarounds available here: NETGEAR ReadyNAS • View topic - 4200 iSCSI performance solved getting 100 MBs per nic

Hope you have gone through the same.
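
If it helps, a minimal way to watch for those messages on ESXi 5.x (the log location differs on older ESX releases) is to tail the vmkernel log while triggering a rescan:

  # tail -f /var/log/vmkernel.log
  # esxcli storage core adapter rescan --all

Run the rescan from a second SSH session (or from the vSphere Client) and look for iSCSI login, authentication, or timeout messages as the LUNs are discovered.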

Best of luck..

PS:

To get better performance from the NAS, please test VMware Virtual SAN.

This software has a lot of uses - but how is improving performance of an existing SAN one of them?

Totally agree, Josh26. VMware Virtual SAN has been my nightmare for several weeks.

snowdog_2112
Enthusiast

"this" is consumer grade - meaning the GS108T switch or the ReadyNAS?  I agree completely on the switch - it's a $60 item, but it does allow you to set frame size up to 9126.

Regarding the vmkping - I did that. Anything smaller than 1500 pings; anything between 1500 and 8784 would not ping, but did NOT give a size error; vmkpings bigger than 8784 did give an error. This suggests it's *kind of* working.

No authentication is needed, and setting only the VMkernel port groups back to 1500 brings everything back to a state of happiness - or at least normal operation.

I've read through those links and a few linked off them.  It sounds like MPIO is my next option.

I will mark this closed and post a follow-up when I can actually try the MPIO solution - it may not be for a while.

Thanks.

tomtom901
Commander

A failing vmkping would indicate that something isn't quite set up for Jumbo Frames yet. MPIO could also do you some good. Please award helpful answers and then close this.

Good luck with configuring your environment.

zXi_Gamer
Virtuoso

anything between 1500 and 8784 would not ping, but did NOT give a size error

That's bad, since either Jumbo Frames are not configured on the VMkernel port or the packet is being blocked by the vSwitch.

vmkpings bigger than 8784 did give an error.  This suggests it's *kind of* working.

Can you provide the error, if possible? "This suggests it's *kind of* working"... hmmm, I wouldn't place a bet on that if I had any money.

only the VMkernel port groups back to 1500 brings everything back to a state of happiness

No jumbo frames then.. good luck on your MPIO again.. cheers..

admin
Immortal

Ensure that you read this important information about Jumbo Frames before working with them:

  • ESX/ESXi supports a maximum MTU size of 9000.

    Note: Some switch configurations for Jumbo Frames need to have an MTU set higher than 9000. For more information, see the Cisco Nexus 5000 Series NX-OS Software Configuration Guide.

  • Any packet larger than the standard 1500-byte MTU is a Jumbo Frame. ESX/ESXi supports frames up to 9 KB (9000 bytes).
  • Jumbo Frames are limited to data networking only (virtual machines and the vMotion network).
  • It is possible to configure Jumbo Frames for an iSCSI network. It is not a fully supported configuration in ESX 3.5, but it is supported in ESX/ESXi 4.x and ESXi 5.x.
  • You can enable Jumbo Frames for each vSwitch or VMkernel interface through the command line interface on your ESX host.
  • To allow an ESX host to send larger frames out onto the physical network, the network must support Jumbo Frames end to end.
  • Ensure that your NIC or LOM supports Jumbo Frames.
  • For experimental support of Jumbo Frames in ESX 3.5, these NICs are supported:

    • Intel (82546, 82571)
    • Broadcom (5708, 5706, 5709, 57710, 57711)
    • Netxen (NXB-10GXxR, NXB-10GCX4)
    • Neterion (Xframe, Xframe II, Xframe E)

  • For ESX/ESXi 4.x and ESXi 5.x, contact your NIC hardware vendor regarding support for Jumbo Frames.
  • You cannot use Jumbo Frames on a Broadcom card that is configured as a hardware initiator performing iSCSI Offload functions. You can either use Jumbo Frames or iSCSI Offload and you cannot use both together with the Broadcom adapters.

Jumbo Frames in ESXi 5.1 and later

Jumbo Frames for all iSCSI adapters in vSphere 5.1 and vSphere 5.5 can be configured using the UI. This applies to Software iSCSI, Dependent Hardware iSCSI, and Independent Hardware iSCSI adapters.

To enable Jumbo Frames for software and dependent hardware iSCSI adapters in the vSphere Web Client, change the default value of the MTU parameter:

  1. Browse to the host in the vSphere Web Client navigator.
  2. Click the Manage tab, and click Networking.
  3. Click Virtual Switches, and select the vSphere switch that you want to modify from the list.
  4. Click Edit Settings.
  5. On the Properties page, change the MTU parameter.

    This step sets the MTU for all physical NICs on that standard switch. The MTU value should be set to the largest MTU size among all NICs connected to the standard switch. ESXi supports the MTU size up to 9000 Bytes.
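
For reference, the same change can also be made from the ESXi 5.x command line. A minimal sketch, assuming the iSCSI vSwitch is vSwitch1 and its VMkernel interface is vmk1 (substitute your own names):

  # esxcli network vswitch standard set -m 9000 -v vSwitch1
  # esxcli network ip interface set -m 9000 -i vmk1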

To enable Jumbo Frames for independent hardware iSCSI adapters in the vSphere Web Client, change the default value of the MTU parameter:

Use the Advanced Options settings to change the MTU parameter for the iSCSI HBA.

  1. Browse to the host in the vSphere Web Client navigator.
  2. Click the Manage tab, and click Storage.
  3. Click Storage Adapters, and select the independent hardware iSCSI adapter from the list of adapters.
  4. Under Adapter Details, click the Advanced Options tab and click Edit.
  5. Change the value of the MTU parameter.

Jumbo Frames in ESX/ESXi 5.0 and earlier

Creating a Jumbo Frames-enabled vSwitch

To create a Jumbo Frames-enabled vSwitch:

  1. Log in to the ESX host console directly.
  2. To set the MTU size for the vSwitch:

    • Run this command for ESX 3.5 and ESX/ESXi 4.x:

      # esxcfg-vswitch -m MTU vSwitch#

    • Run this command for ESXi 5.0:

      # esxcli network vswitch standard set -m MTU -v vSwitch#

      Note: This command sets the MTU for all uplinks on that vSwitch. Set the MTU size to the largest MTU size among all the virtual network adapters connected to the vSwitch.

  3. To display a list of vSwitches on the host, and to check that the configuration of the vSwitch is correct:

    • Run this command for ESX 3.5 and ESX/ESXi 4.x:

      # esxcfg-vswitch -l

    • Run this command for ESXi 5.0:

      # esxcli network vswitch standard list
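
For instance, on ESXi 5.0, assuming the iSCSI vSwitch is named vSwitch2 (an assumed name), the two commands would look like:

  # esxcli network vswitch standard set -m 9000 -v vSwitch2
  # esxcli network vswitch standard list

In the list output, confirm that the MTU column for vSwitch2 now shows 9000.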

Configuring Jumbo Frames on a vSphere Standard Switch

To configure Jumbo Frames on a vSphere Standard Switch:

  1. Log into the vSphere Client and select the Hosts and Clusters inventory view.
  2. On the host Configuration tab, click Networking.
  3. Click Properties for the vSphere standard switch associated with the VMkernel to modify.
  4. On the Ports tab, select the VMkernel interface and click Edit.
  5. Set the MTU to 9000, and click OK.

Note: To create a Jumbo Frames-enabled vNetwork Distributed Switch and its associated VMkernel interfaces, see Enabling Jumbo Frames for VMkernel ports in a virtual distributed switch (1038827).

To create a Jumbo Frames-enabled VMkernel interface on a vNetwork Standard Switch:

  1. Log directly into the ESX host console.
  2. Obtain the current vSwitch and portgroup configuration with the esxcfg-vswitch command:

    # esxcfg-vswitch -l

  3. To create a VMkernel interface with Jumbo Frames support, we first need to create a portgroup on an existing vSwitch:

    # esxcfg-vswitch -A vmkernel_port_group_name vSwitch#

    Note: If you plan to have a vSwitch that contains just the iSCSI port group, set the vSwitch MTU to 9000 and also specify an MTU of 9000 when creating the VMkernel port in the next step. To configure a vSwitch to use Jumbo Frames (MTU 9000):

    # esxcfg-vswitch -m 9000 vSwitch#

  4. To create a VMkernel connection with Jumbo Frame support:

    • Run this command for ESX 3.5 and ESX/ESXi 4.x:

      # esxcfg-vmknic -a -i ip_address -n netmask -m MTU portgroup_name

      Note: If the VMkernel port is already created, use this command (ESX/ESXi 4.1 only):

      # esxcfg-vmknic -m 9000 portgroup_name

    • Run this command for ESXi 5.x:

      # esxcli network ip interface set -m 9000 -i vmk_interface

  5. To display a list of VMkernel interfaces, and to check that the configuration of the Jumbo Frame‐enabled interface is correct:

    • Run this command for ESX 3.5 and ESX/ESXi 4.x:

      # esxcfg-vmknic -l

    • Run this command for ESXi 5.0:

      # esxcli network ip interface list
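
Putting steps 2 to 5 together, a minimal end-to-end sketch for ESX/ESXi 4.x (vSwitch2, the port group name iSCSI_1, and the address 192.168.99.10/24 are assumptions; use values from your own storage network):

  # esxcfg-vswitch -m 9000 vSwitch2
  # esxcfg-vswitch -A iSCSI_1 vSwitch2
  # esxcfg-vmknic -a -i 192.168.99.10 -n 255.255.255.0 -m 9000 iSCSI_1
  # esxcfg-vmknic -l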

To ensure the host is configured properly for the defined MTU size:

  1. Log in to the ESX or ESXi host using SSH.
  2. Run this command from the ESX/ESXi host:

    # vmkping -s packet_size -d IP_address_of_NFS_or_iSCSI_server

    Where:
    -s sets the packet size
    -d indicates do not fragment the packet

    Note: Assuming header size = 216, then -s value (packet size) would be: 9000 - 216 = 8784
    Example command:

    # vmkping -s 8784 -d 192.168.1.100

  3. If you receive a response, communication is occurring at the desired MTU. If you do not receive a response, run the vmkping command without the -d option:

    # vmkping -s packet_size IP_address_of_NFS_or_iSCSI_server

  4. If you now receive a response, a configuration issue still exists and the large packets are being fragmented. This can lead to disk latency and to disruption of networking or storage for other components in your environment, because fragmenting and re-assembling packets uses a lot of CPU resources on the switch, storage processor, and ESX hosts.

  5. Verify that the ESX/ESXi host is configured properly for Jumbo Frames. Run this command:

    # esxcfg-nics -l
    Name    PCI           Driver      Link Speed     Duplex MAC Address       MTU    Description
    vmnic0  0000:01:00.00 bnx2        Up   1000Mbps  Full   84:2b:2b:16:c3:35 1500   Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
    vmnic1  0000:01:00.01 bnx2        Up   1000Mbps  Full   84:2b:2b:16:a3:37 1500   Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet
    vmnic2  0000:02:00.00 bnx2        Up   1000Mbps  Full   84:2b:2b:16:f3:39 9000   Broadcom Corporation PowerEdge R710 BCM5709 Gigabit Ethernet


  6. Verify that the vSwitch is also configured for Jumbo Frames; the value under the MTU column should match your desired MTU size:

    # esxcfg-vswitch -l
    Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
    vSwitch0         128         3           128               1500    vmnic0

      PortGroup Name        VLAN ID  Used Ports  Uplinks
      Management Network    0        1           vmnic0

    Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
    vSwitch1         128         2           128               1500    vmnic1

      PortGroup Name        VLAN ID  Used Ports  Uplinks
      VMnet VLAN 5          0        0           vmnic1

    Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
    vSwitch2         128         3           128               9000    vmnic2

      PortGroup Name        VLAN ID  Used Ports  Uplinks
      iSCSI_1               0        1           vmnic2

  7. Verify the MTU column for the vSwitch that has the VMkernel port configured on it also matches the MTU size. For more information, see iSCSI and Jumbo Frames configuration on ESX/ESXi (1007654) or Enabling Jumbo Frames for VMkernel ports in a virtual distributed switch (1038827).

    Note: If the VMkernel port and vSwitch are correctly configured for Jumbo Frames but the problem persists, then there is a configuration problem either on a network component such as a network switch or router, or on the storage processor.

  8. Verify that all devices between the ESX host and the storage array (including physical network switches) are configured to support Jumbo Frames for the desired MTU size.