VMware Cloud Community
CTSTech
Enthusiast

Jumbo Frames?

Want to make sure I have jumbo frames set up correctly.

Hardware:

3x Dell R710 Servers with 6 NICs (ESXi Servers)

1x Dell R310 Server with 6 NICs (vCenter & Management)

2x Dell PowerConnect 6224 Stacking Switches

1x Dell MD3200i SAN

1x Dell MD1200 DAS

I set the MTU to 9000 on each iSCSI port on the SAN. I set the MTU to 9216 on ports 1-12 on each switch (SAN VLAN).
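
For reference, on the PowerConnect 6224 CLI that is roughly the following (exact syntax may vary by firmware version, so treat this as a sketch):

interface range ethernet 1/g1-1/g12
mtu 9216
exit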

On each ESXi host, I created a vSwitch, added 4 VMkernel NICs to it (2 more are for Management), and set the MTU to 9000 on the vSwitch.

Does each individual VMkernel NIC need its MTU set? Or do they inherit that setting from the vSwitch they belong to?

Anything else I need to configure?

rickardnobel
Champion

CTSTech wrote:

Does each individual VMkernel NIC need its MTU set? Or do they inherit that setting from the vSwitch they belong to?

You will have to raise the MTU on each VMkernel port. This is much easier in 5.0 with the GUI than in 4.x, where you had to remove and then re-add the VMkernel ports through the command line!
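
If you prefer the shell, something like this should work on 5.0 (vSwitch1 and vmk1/vmk2 are placeholders for your own vSwitch and VMkernel interface names):

esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
esxcli network ip interface set -i vmk2 -m 9000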

My VMware blog: www.rickardnobel.se
CTSTech
Enthusiast

So both the vSwitch and the VMkernel ports need the MTU raised?

rickardnobel
Champion

CTSTech wrote:

So both the vSwitch and the VMkernel ports need the MTU raised?

Yes, that is correct. Afterwards, try vmkping from the ESXi shell to test the larger frame size.
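
You can also verify what is actually configured from the shell; on ESXi 5 both of these list an MTU column:

esxcli network vswitch standard list
esxcli network ip interface list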

My VMware blog: www.rickardnobel.se
CTSTech
Enthusiast

Without any changes, I tried "vmkping -s 9000 192.168.130.101" and got all 3 pings back with 9008 bytes.

So maybe only the vSwitch needs to have the MTU raised?

rickardnobel
Champion

CTSTech wrote:

Without any changes, I tried "vmkping -s 9000 192.168.130.101" and got all 3 pings back with 9008 bytes.

So maybe only the vSwitch needs to have the MTU raised?

Try vmkping -s 8900 -d 192.168.130.101

The -d option sets the "don't fragment" bit, which makes sure the frames are not fragmented by the IP stack.
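
The largest payload that fits unfragmented in a 9000 byte MTU is 8972 bytes, since the 20 byte IP header and 8 byte ICMP header must also fit inside the frame (9000 - 20 - 8 = 8972). So the strictest possible test would be:

vmkping -d -s 8972 192.168.130.101

Using 8900 just leaves a little margin.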

My VMware blog: www.rickardnobel.se
CTSTech
Enthusiast

No, that did not work - but when I changed the MTU on the VMkernel ports it did.

Thanks!

rickardnobel
Champion

Great! It is very easy to get confused when trying vmkping without -d: the jumbo frames seem to work, while in reality the packets are being split into normal-sized frames by IP fragmentation.

My VMware blog: www.rickardnobel.se
spf62
Contributor

We are having the same issue. How did you change the MTU on the VMkernel ports?

rickardnobel
Champion

spf62 wrote:

We are having the same issue. How did you change the MTU on the VMkernel ports?

Do you have ESXi 5? Then it is just a change on the VMkernel port group on the vSwitch.

My VMware blog: www.rickardnobel.se
spf62
Contributor

We are using ESX 4.1 Update 2.

rickardnobel
Champion

spf62 wrote:

We are using ESX 4.1 update 2

Since this thread is in the ESXi 5 part of the forum, you might want to create a new thread in the ESX 4 forum:

http://communities.vmware.com/community/vmtn/server/vsphere/esx

Enabling a larger MTU is a bit harder on 4.x: it involves deleting the VMkernel interfaces and then re-creating them from the command line with the jumbo frame parameters.
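
Roughly like this (vSwitch1, the "iSCSI1" port group name, and the IP details below are placeholders for your own values; note that the VMkernel interface loses connectivity until it is re-created):

esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vmknic -d "iSCSI1"
esxcfg-vmknic -a -i 192.168.130.10 -n 255.255.255.0 -m 9000 "iSCSI1"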

My VMware blog: www.rickardnobel.se
spf62
Contributor

We are now on ESXi 5.0 and jumbo frames still don't work.

We are using all approved HP hardware: hosts and ProCurve 2810-48G switches.

Attached is our ProCurve config.

rickardnobel
Champion

Since you also posted this in another thread, it might be easiest to keep the discussion there:

(http://communities.vmware.com/message/1987929#1987929)
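
One thing worth checking on the ProCurve side, though: on the 2810 jumbo frames are enabled per VLAN rather than per port, so the iSCSI VLAN needs the jumbo setting. As a sketch (VLAN 30 is just an example ID):

vlan 30 jumbo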

My VMware blog: www.rickardnobel.se