Might be a stupid question, but is there any way to force an MTU of 9000 for VMs from the ESXi layer, rather than setting it up at the guest OS level? I would like to check whether it is possible to add something in the VMX file or in the advanced configuration parameters for the VM's vNICs.
Welcome to the Community. Yes: set the MTU on the vSwitch that the virtual machine port group belongs to, and remember to configure the physical network as well to support a 9000-byte MTU end to end.
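For reference, on a standard vSwitch the MTU is raised on the switch itself (its port groups inherit it). A minimal sketch from the ESXi shell, assuming a host with a standard vSwitch named vSwitch0 and a VMkernel port vmk1 (both placeholder names, substitute your own):

```shell
# Raise the MTU of the standard vSwitch; all its port groups inherit it.
esxcli network vswitch standard set -v vSwitch0 -m 9000

# A VMkernel interface (e.g. for vMotion or NFS) keeps its own MTU,
# so raise it separately if it should also use jumbo frames.
esxcli network ip interface set -i vmk1 -m 9000

# Verify the new setting.
esxcli network vswitch standard list -v vSwitch0 | grep -i mtu
```

On a distributed switch the equivalent setting lives in the vSphere Client under the switch's advanced properties rather than per host.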
Apart from setting the MTU on the physical switch/vSwitch/port group, we still have to set the MTU at the guest OS level, right? So I am looking for some sort of setting to force the vNIC MTU to 9000 from the ESXi layer rather than changing the value in the guest operating system, just like we can force the MAC address of a VM's vNIC in the VMX file. That is the sort of option I am looking for.
I really fail to see the point you're trying to make here. The hypervisor and vSwitch merely provide layer 2 forwarding logic. It's at the discretion of the endpoint (read: the guest OS networking stack) to generate frames of whatever size and send them down the virtual wire.
Even if the guest OS detects that a NIC/driver is capable of using jumbo frames, every OS that I know of will default to the ordinary MTU of 1500 bytes unless configured otherwise. You can't expect the hypervisor to magically interfere with the guest's networking stack to create frames of an arbitrary size without the guest OS being aware of it or being configured for it.
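To illustrate the guest-side step that cannot be skipped, here is a sketch for a Linux guest (the interface name ens192 is a placeholder; the change is non-persistent and would be made permanent in the distro's own network configuration):

```shell
# Show the current MTU; it defaults to 1500 even on a jumbo-capable vNIC.
ip link show dev ens192

# Raise the interface MTU to 9000 for the running system only.
# Persist it via netplan, NetworkManager, or ifcfg files as appropriate.
ip link set dev ens192 mtu 9000
```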
Apart from setting the MTU on the physical switch/vSwitch/port group, we still have to set the MTU at the guest OS level, right? So I am looking for some sort of setting to force the vNIC MTU to 9000 from the ESXi layer rather than changing the value in the guest operating system.
As noted, the MTU setting on the port group acts only as a maximum allowed frame size at the vSwitch level and is invisible both to the physical switches and to the guest operating systems. There is no standard way to negotiate the maximum frame size at layer 2 (segment size is negotiated at layer 4 within each TCP session via the MSS option, but that is something else).
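Since nothing negotiates the frame size on your behalf, the usual way to confirm jumbo frames work end to end is a do-not-fragment ping between two guests; the address 10.0.0.2 below is a placeholder for the other VM:

```shell
# Linux: send a 9000-byte frame with the DF (don't fragment) bit set.
# 8972 ICMP payload + 20-byte IP header + 8-byte ICMP header = 9000 bytes.
ping -M do -s 8972 10.0.0.2

# If any hop along the path has an MTU below 9000, the ping fails with
# "Message too long" instead of silently fragmenting.
```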