VMware Cloud Community
vSohill
Expert

vSAN MTU size

Hi,

Should I change the MTU size for vSAN in the vDS global configuration or on the Portgroup?


12 Replies
sk84
Expert

If you could find a way to change the MTU settings of a dvSwitch at the dvPortgroup level, you could do it there as well...

But before you start looking for this setting: with dvSwitches you can only change the MTU setting globally at the dvSwitch level.

You have to adjust the MTU on vmkernel port level, on dvSwitch level and in the whole physical switch infrastructure.

However, nowadays it isn't recommended to enable jumbo frames solely because of vSAN. With the TSO and LRO features, traffic optimization is already taking place, so the benefit of jumbo frames is rather low. In contrast, there are many possible pitfalls during configuration.

See: MTU and Jumbo Frames Considerations | VMware® vSAN™ Design and Sizing Guide | VMware
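If jumbo frames are still wanted, the MTU at the levels Sebastian lists can be checked from the ESXi shell as a sanity check (a sketch; vmk names and the exact output differ per environment):

```shell
# MTU as reported for each distributed vSwitch on this host
esxcli network vswitch dvs vmware list | grep -i mtu

# MTU of every vmkernel interface (including the vSAN vmk)
esxcli network ip interface list | grep -iE "^vmk|mtu"
```

The physical-switch side has to be checked on the switch itself with the vendor's own CLI.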

--- Regards, Sebastian VCP6.5-DCV // VCP7-CMA // vSAN 2017 Specialist Please mark this answer as 'helpful' or 'correct' if you think your question has been answered correctly.
vSohill
Expert

Thank you sk84,

What is the difference between adjusting the MTU on the vmkernel port and on the global switch configuration?

sk84
Expert

You need to configure it in both places in vSphere for jumbo frames to work (and physical switch level as well). If it isn't properly configured in one place, it will cause strange network behavior.

Simply put, there are different abstraction layers and different traffic types in the hypervisor. vmkernel ports are required for system traffic, and normal port groups for VM traffic. vmkernel ports are also treated slightly differently than normal VM traffic. Among other things, this serves to isolate and prioritize traffic types.

However, vmkernel ports also require normal portgroups, since their traffic in the hypervisor will have to pass through the dvSwitch layer at some point.

vSohill
Expert

From StorageHub: "If however there is an MTU of 1500 on the vmknic and an MTU 9000 on the physical switch" - does that mean vSAN traffic will not be fragmented?

TheBobkin
Champion

Hello @vSohill,

Having 1500 on all vmk interfaces and 9000 on the vSwitch + physical switch (or just the physical switch) is fine and won't cause fragmentation - however doing the opposite of this (9000 on the vmk but 1500 on the switch) will of course cause fragmentation.

There generally isn't a significant benefit to using jumbo frames with vSAN, and it adds complexity that can result in issues (e.g. someone lowers the MTU on the switch, either intentionally or via a non-saved configuration plus a reboot/upgrade, and causes a cluster partition).

If it is a requirement to increase the MTU to 9000, then configure this on the physical switch (either globally or just on the switch ports in use for vSAN), configure the vDS/vSS to 9000, then schedule a short downtime to configure and validate this on the vmk interfaces.

If the vCenter is running on the vSAN cluster then either set these via the Host UI client or CLI e.g.:

# esxcli network ip interface set -i vmkX -m 9000

Do properly validate that a full frame can pass, e.g.:

# vmkping -I vmkX -d -s 8972 <vSAN-IP-of-other-nodes-vmk>
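The -s 8972 value is not arbitrary: the ICMP payload must leave room for the 20-byte IP header and the 8-byte ICMP header inside the 9000-byte MTU. A quick check of the arithmetic:

```shell
# Largest ICMP payload that fits in one 9000-byte frame without fragmenting:
# MTU minus IP header (20 bytes) minus ICMP header (8 bytes).
MTU=9000
echo $((MTU - 20 - 8))   # prints 8972
```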

Bob

ManivelR
Hot Shot

Bobkin helped me on this case. This is a pre-prod environment.

From my switch end it is 9000 MTU, and the vSAN DVS/vmkernel was set to MTU 9000.

My Mellanox RDMA bonded 100 G NIC supports only 4096 MTU, so I tried to reconfigure the MTU as 4096 on the DVS/vmkernel port, but all of a sudden all the VMs went to an invalid state.

1) When I started reconfiguring the MTU as 4096 on the DVS, it created a mess, and the vCenter Server plus all other VMs went to an invalid state; see the screenshot below.

2) I found this article and, from the command line, changed the MTU to 4096 on all 3 ESXi hosts.

It came back to normal and the issue got fixed.
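For anyone hitting the same state, the per-host command-line change ManivelR describes would look roughly like this (a sketch; vmk1 is an example name, 4096 matches the NIC's supported maximum here, and the exact article used is not linked in the thread):

```shell
# On each ESXi host, set the vSAN vmkernel interface to the NIC's supported MTU
esxcli network ip interface set -i vmk1 -m 4096

# Confirm the change took effect
esxcli network ip interface list | grep -iE "^vmk1|mtu"

# Validate end-to-end: 4096 - 20 (IP) - 8 (ICMP) = 4068-byte payload
vmkping -I vmk1 -d -s 4068 <vSAN-IP-of-other-nodes-vmk>
```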

 

 

[Screenshot attachment: ManivelR_0-1672843308140.png]

TheBobkin
Champion

@ManivelR Happy to help and good to see things I shared years ago still helping vSAN users!

Gungazoo
Contributor

I had a similar issue. Would you share the article please?

Thanks in advance.

TheBobkin
Champion

@Gungazoo @ManivelR was just saying that the information in the thread above was helpful in dealing with an MTU issue they had; they were not referencing any specific document, just the comments above.

You will need to be far more specific about the issue you are facing if you want any meaningful help with it. I would advise you to create a new question/thread for that purpose rather than replying to this very old thread.

kastlr
Expert

Hi Bob,

While I do agree that using jumbo frames doesn't provide a significant benefit to vSAN performance, I would still recommend using them whenever possible with vSAN and vMotion.

Simply because the physical LAN switches benefit from jumbo frames, as they don't need to handle more packets than necessary.
Instead of handling 6 headers when using an MTU size of 1500, the involved switches could free up CPU resources by handling only one large (jumbo) frame of 9000.

Just as VAAI freed up resources on the host side, using jumbo frames would have a similar effect on the LAN switch side.
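The rough arithmetic behind the header-count point (illustrative only, assuming 20-byte IP and 20-byte TCP headers and ignoring Ethernet framing):

```shell
# Per-frame TCP payload = MTU - 20 (IP header) - 20 (TCP header)
STD_PAYLOAD=$((1500 - 40))    # 1460 bytes per standard frame
JUMBO_PAYLOAD=$((9000 - 40))  # 8960 bytes per jumbo frame
echo $((JUMBO_PAYLOAD / STD_PAYLOAD))   # prints 6
```

So roughly six times fewer frames, and therefore fewer headers, for the same amount of data moved.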

Just my 2 cents. 


Hope this helps a bit.
Greetings from Germany. (CEST)
TheBobkin
Champion

Hi @kastlr,

I agree that there are benefits to using jumbo frames; I'm not debating that. My mentioning that it adds complexity and possible risk just comes from the biased side of seeing it go wrong in production when people make uninformed changes, but this is likely just 'reverse survivorship bias', e.g. no one calls us in support to tell us everything is working fine 😅.

Note it isn't always an intentional change that drops the MTU to 1500 on a physical switch where the vSAN vmks are set to 9000 - I have seen things in the past like switch firmware upgrades reverting it to the default (presumably 1500), or configuration aspects like setting 9000 MTU in run-time only and not persisting the setting.
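One mitigation for the non-persisted-configuration case: on many physical switches the running configuration must be explicitly saved to survive a reboot. The exact command is vendor-specific; on Cisco IOS-style CLIs, for example, it is:

```shell
# Cisco IOS-style example only; other vendors differ
# (e.g. "commit" on Junos after a configuration change)
copy running-config startup-config
```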

Mortuza1
Contributor

MTU on vmkernel 
