If there were a way to change the MTU settings of a dvSwitch at the dvPortgroup level, you could do it there as well...
But before you start looking for that setting: with dvSwitches, the MTU can only be changed globally at the dvSwitch level.
You have to adjust the MTU at the vmkernel port level, at the dvSwitch level, and across the entire physical switch infrastructure.
However, nowadays it isn't recommended to activate jumbo frames solely for vSAN. Thanks to TSO (TCP Segmentation Offload) and LRO (Large Receive Offload), traffic optimizations are already taking place, so the benefit of jumbo frames is rather small. In contrast, there are many possible pitfalls during configuration.
Thank you SK84,
What is the difference between adjusting the MTU on the vmkernel port versus globally on the switch?
You need to configure it in both places in vSphere for jumbo frames to work (and physical switch level as well). If it isn't properly configured in one place, it will cause strange network behavior.
Simply said, there are different abstraction layers and different traffic types in the hypervisor. vmkernel ports are required for system traffic and normal port groups for VM traffic. vmkernel ports are also treated slightly differently than normal VM traffic. This serves among other things to isolate and prioritize traffic types.
However, vmkernel ports are also backed by normal portgroups, since their traffic has to pass through the dvSwitch layer at some point anyway.
From StorageHub: "If however there is an MTU of 1500 on the vmknic and an MTU 9000 on the physical switch" - does that mean vSAN traffic will not be fragmented?
Having 1500 on all vmk interfaces and 9000 on the vSwitch + physical switch (or just the physical switch) is fine and won't cause fragmentation - however doing the opposite of this (9000 on the vmk but 1500 on the switch) will of course cause fragmentation.
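Before changing anything, it's worth checking what is currently configured on the host side. A quick sketch with standard esxcli commands (the output lists each interface/switch together with its MTU; any names shown are just whatever exists on your host):

```shell
# List all vmkernel interfaces with their current MTU values
esxcli network ip interface list

# List standard vSwitches and their configured MTU
# (for a dvSwitch, the MTU is set in vCenter at the dvSwitch level)
esxcli network vswitch standard list
```

Comparing these values against the physical switch configuration tells you immediately which side of a mismatch you are on.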
There generally isn't a significant benefit to using jumbo frames with vSAN, and it adds complexity that can result in issues (e.g. someone lowers the MTU on the switch, either intentionally or via an unsaved configuration lost on reboot/upgrade, and causes a cluster partition).
If it is a requirement to increase the MTU to 9000, then configure this on the physical switch first (either globally or just on the switch-ports in use for vSAN), configure the vDS/vSS to 9000, then schedule a short downtime to configure and validate this on the vmk interfaces.
If the vCenter is running on the vSAN cluster, then set these via the Host Client UI or the CLI, e.g.:
# esxcli network ip interface set -i vmkX -m 9000
Do properly validate that full-size frames can pass, e.g.:
# vmkping -I vmkX -s 8972 -d <vSAN-IP-of-other-nodes-vmk>
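To cover every path, the ping can be repeated against each peer node's vSAN vmk IP. A minimal sketch, assuming vmk2 is the vSAN vmkernel interface and the addresses below are placeholders for your other nodes:

```shell
# Hypothetical: validate jumbo frames end-to-end to each other node's vSAN vmk.
# -s 8972 = 9000 bytes minus 28 bytes of IP + ICMP headers
# -d sets the don't-fragment bit, so an MTU mismatch fails instead of fragmenting
for ip in 192.168.10.12 192.168.10.13 192.168.10.14; do
  vmkping -I vmk2 -s 8972 -d -c 3 "$ip"
done
```

If any of these pings fail while a normal `vmkping -I vmk2 <ip>` succeeds, the MTU is not consistently set somewhere along that path.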