VMware Cloud Community
g580
Enthusiast

Trouble Setting MTU to 9000 on ESX host and guests

I just set the MTU to 9000 on the ESXi host.

Then I changed the MTU to 9000 on the Linux guests.

Now the network fails.

Shouldn't this be supported?

1 Solution

Accepted Solutions
Dee006
Hot Shot

vmk0       Management Network  IPv6      fe80::a236:9fff:fe3e:cbcc               64                              a0:36:9f:3e:cb:cc 1500    65535     true    STATIC, PREFERRED

It looks like your vmk0 is using IPv6 instead of IPv4. Can you try disabling IPv6 on your vmk0?

View solution in original post

15 Replies
SureshKumarMuth
Commander

When setting jumbo frames, the MTU must be the same on every node in the path from source to destination.

Check whether the physical switch is configured with the same MTU size as well.
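On the ESXi side, both layers can be double-checked from the CLI (a quick sketch; vSwitch and interface names will differ per setup):

~ # esxcfg-vswitch -l     # lists each vSwitch with its configured MTU
~ # esxcfg-vmknic -l      # lists each vmkernel interface with its MTU

The vSwitch and the vmkernel interface each carry their own MTU setting, and both must be raised for jumbo frames to work end to end.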

Regards,
Suresh
https://vconnectit.wordpress.com/
g580
Enthusiast

Hi Suresh,

I checked all the servers and switches. All NICs are set to MTU=9000 on the ESXi host as well as on the clients.

The switch ports are all set to MTU=9000 as well.

SureshKumarMuth
Commander

How are you testing the connectivity? Use this KB to test it:

VMware KB: Testing VMkernel network connectivity with the vmkping command
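In short, the test looks like this (a sketch; substitute your destination IP). Note that -s sets the ICMP payload size, so with a 9000-byte MTU the largest payload that fits unfragmented is 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes:

~ # vmkping -d -s 8972 <destination-ip>     # -d sets DF (don't fragment)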

If possible, post the command output here.

Regards,
Suresh
https://vconnectit.wordpress.com/
g580
Enthusiast

Hi Suresh:

1) I am testing via scp from a physical server (10.1.1.5) to the VM (10.1.1.42), as root. The ping is successful, but the scp is not, which suggests a networking problem.

2) Following your suggestion, I ssh'd into the ESXi host (10.1.1.40). Do you see a problem in the output of "esxcfg-nics -l" or "esxcfg-vmknic -l" below?

~ # esxcfg-nics -l

Name    PCI           Driver      Link Speed     Duplex MAC Address       MTU    Description

vmnic0  0000:01:00.00 tg3         Down 0Mbps     Half   90:b1:1c:57:b7:aa 1500   Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet

vmnic1  0000:01:00.01 tg3         Down 0Mbps     Half   90:b1:1c:57:b7:ab 1500   Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet

vmnic2  0000:02:00.00 tg3         Down 0Mbps     Half   90:b1:1c:57:b7:ac 1500   Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet

vmnic3  0000:02:00.01 tg3         Down 0Mbps     Half   90:b1:1c:57:b7:ad 1500   Broadcom Corporation NetXtreme BCM5720 Gigabit Ethernet

vmnic4  0000:42:00.00 ixgbe       Up   10000Mbps Full   a0:36:9f:3e:cb:cc 9000   Intel Corporation Ethernet Controller 10 Gigabit X540-AT2

vmnic5  0000:42:00.01 ixgbe       Up   10000Mbps Full   a0:36:9f:3e:cb:ce 9000   Intel Corporation Ethernet Controller 10 Gigabit X540-AT2

========================

~ # esxcfg-vmknic -l

Interface  Port Group/DVPort   IP Family IP Address                              Netmask         Broadcast       MAC Address       MTU     TSO MSS   Enabled Type

vmk0       Management Network  IPv4      10.1.1.40                          255.255.255.0   10.1.1.255 a0:36:9f:3e:cb:cc 1500    65535     true    STATIC

vmk0       Management Network  IPv6      fe80::a236:9fff:fe3e:cbcc               64                              a0:36:9f:3e:cb:cc 1500    65535     true    STATIC, PREFERRED

SureshKumarMuth
Commander

The management network still has an MTU of 1500 as per the output.

What is the vmkping output with packet size 8500? Using the article, run the vmkping command with size 8500; if 8500 goes through, then try 9000.

Regards,
Suresh
https://vconnectit.wordpress.com/
g580
Enthusiast

Hi Suresh,

From the ssh session on the ESXi host (10.1.1.40):

~ # vmkping -d -s 8500 10.1.1.5

PING 10.1.1.5 (10.1.1.5): 8500 data bytes

sendto() failed (Message too long)

sendto() failed (Message too long)

sendto() failed (Message too long)

--- 10.1.1.5 ping statistics ---

3 packets transmitted, 0 packets received, 100% packet loss

~ # vmkping -d -s 9000 10.1.1.5

PING 10.1.1.5 (10.1.1.5): 9000 data bytes

sendto() failed (Message too long)

sendto() failed (Message too long)

sendto() failed (Message too long)

--- 10.1.1.5 ping statistics ---

3 packets transmitted, 0 packets received, 100% packet loss

SureshKumarMuth
Commander

Check whether the test succeeds with 1500. If it does, this is a settings issue, and you may have to recheck all the MTU-related settings at every level.

Regards,
Suresh
https://vconnectit.wordpress.com/
g580
Enthusiast

Hi Suresh,

What do you make of the output below from 3 tests with vmkping?

~ # vmkping 10.1.1.5

PING 10.1.1.5 (10.1.1.5): 56 data bytes

64 bytes from 10.1.1.5: icmp_seq=0 ttl=64 time=0.374 ms

64 bytes from 10.1.1.5: icmp_seq=1 ttl=64 time=0.270 ms

64 bytes from 10.1.1.5: icmp_seq=2 ttl=64 time=0.219 ms

--- 10.1.1.5 ping statistics ---

3 packets transmitted, 3 packets received, 0% packet loss

round-trip min/avg/max = 0.219/0.288/0.374 ms

~ # vmkping -s 1500 10.1.1.5

PING 10.1.1.5 (10.1.1.5): 1500 data bytes

1508 bytes from 10.1.1.5: icmp_seq=0 ttl=64 time=0.313 ms

1508 bytes from 10.1.1.5: icmp_seq=1 ttl=64 time=0.248 ms

1508 bytes from 10.1.1.5: icmp_seq=2 ttl=64 time=0.276 ms

--- 10.1.1.5 ping statistics ---

3 packets transmitted, 3 packets received, 0% packet loss

round-trip min/avg/max = 0.248/0.279/0.313 ms

~ # vmkping -d -s 1500 10.1.1.5

PING 10.1.1.5 (10.1.1.5): 1500 data bytes

sendto() failed (Message too long)

sendto() failed (Message too long)

sendto() failed (Message too long)

--- 10.1.1.5 ping statistics ---

3 packets transmitted, 0 packets received, 100% packet loss

SureshKumarMuth
Commander

The first one is a normal ping with the default 56-byte payload (64 bytes with the ICMP header).

The second shows it can send a 1500-byte payload, because without -d the packet is simply fragmented.

The third uses the -d switch, which sets the DF (Don't Fragment) bit in the IPv4 header. A 1500-byte payload plus the 8-byte ICMP header and the 20-byte IP header comes to 1528 bytes, which is larger than a 1500-byte MTU; with DF set the packet cannot be fragmented, so it is rejected. Coming back to the original question: these tests show we can send packets within an MTU of 1500 but nothing beyond, so jumbo frames are not properly configured.

Please provide the full setup details from source to destination. We can set the scp part aside; first we should be able to communicate using vmkping.

Regards,
Suresh
https://vconnectit.wordpress.com/
g580
Enthusiast

Hi Suresh,

So, how do I set the management port to have MTU=9000?

Regarding the setup details:

I have a physical server 10.1.1.5 running RHEL 6.6.

I have an ESXi 5.5 server (10.1.1.40) that hosts a RHEL 6.6 VM (10.1.1.42).

All have NICs set to MTU=9000.
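On the RHEL side, the same checks can be done like this (a sketch; eth0 and the target IP are assumptions for this setup):

# ip link show eth0 | grep mtu      # confirm the configured MTU
# ping -M do -s 8972 10.1.1.40      # -M do forbids fragmentation, like vmkping -d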

SureshKumarMuth
Commander

Here is a sample; check the vmkernel section and follow the same steps to set the MTU for the management network:

http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/11760...
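For reference, on ESXi 5.x the change is typically two esxcli steps; vSwitch0 and vmk0 here are assumptions, so substitute your own names:

~ # esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000    # raise the vSwitch MTU first
~ # esxcli network ip interface set --interface-name=vmk0 --mtu=9000          # then the vmkernel (management) interface

Verify afterwards with esxcfg-vmknic -l; the vmk0 line should now show MTU 9000.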

Regards,
Suresh
https://vconnectit.wordpress.com/
Dee006
Hot Shot

vmk0       Management Network  IPv6      fe80::a236:9fff:fe3e:cbcc               64                              a0:36:9f:3e:cb:cc 1500    65535     true    STATIC, PREFERRED

It looks like your vmk0 is using IPv6 instead of IPv4. Can you try disabling IPv6 on your vmk0?
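(A note for later readers: the exact procedure to disable IPv6 depends on the ESXi version; on newer releases it is a single host-wide esxcli command followed by a reboot, while older builds expose it in the vSphere Client networking properties. Check the documentation for your build.)

~ # esxcli network ip set --ipv6-enabled=false    # host-wide on newer ESXi; requires a reboot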

g580
Enthusiast

Hi Dee006:

You are right.

I just disabled IPv6 on the vmk0.  Everything is working great now.

I just wanted to let you know this.  I really appreciate you offering your thoughts on this issue.  You saved me a lot of unnecessary grief.

I will test more and post more updates.

--

Best regards,

Oscar

g580
Enthusiast

Hi Suresh,

Thanks so much for helping me; working through this with you was very useful.

The solution turned out to be the one suggested by Dee006; see above.

Turning off IPv6 solved the problem.

--

Best regards,

Oscar

SureshKumarMuth
Commander

That's cool...cheers...

Regards,
Suresh
https://vconnectit.wordpress.com/