VMware Cloud Community
carmos
Contributor

Cannot set jumbo frames in vCenter

System setup:
ESXi (8.x) cluster with 3 hosts, each with 2x 10Gb adapters (plus a few 1Gb adapters)
ESXi vSwitches attached to the 10Gb adapters are configured with MTU 9000
I use iSCSI; all storage VMkernel adapters and everything on the SAN have jumbo frames working fine.
The ESXi management VMkernel adapter is on the same 10Gb adapter as vCenter.

vCenter was tested as installed, out of the box: a VMXNET3 adapter with the guest OS set to "Other Linux 6.x". Changing the OS type to Photon OS has no effect on the MTU; it is still 1484. vCenter is attached to a port group connected to a 10Gb adapter.

I test the maximum MTU from the backup server (which also has 10Gb adapters) using "ping xx.xx.xx.xx -l 8972 -f".
I can successfully ping the ESXi management adapter with an 8972-byte payload on the same host where vCenter runs.
I can NOT ping vCenter with a payload larger than 1484 bytes (which strangles my backup speed).
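For reference, the 8972-byte figure follows directly from the jumbo MTU. A minimal sketch of the arithmetic and the equivalent test commands (the address is a placeholder):

```shell
# An ICMP echo payload of 8972 bytes exactly fills a 9000-byte frame:
# 9000 minus the 20-byte IPv4 header minus the 8-byte ICMP header.
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "$PAYLOAD"   # prints 8972

# From a Windows host (-f sets don't-fragment, -l the payload size):
#   ping 10.0.0.20 -l 8972 -f
# From an ESXi shell (-d sets don't-fragment, -s the payload size):
#   vmkping -d -s 8972 10.0.0.20
```

With don't-fragment set, any payload above this limit fails somewhere along the path, which is exactly the behaviour described above.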

All I get from Google is a thousand sites explaining "this is how you configure MTU in ESXi", nothing specific about vCenter.


6 Replies
CallistoJag
Hot Shot

It is the VMkernel adapter used for vCenter Server communications that determines its MTU.

Enabling jumbo frames on a VMkernel port from the vCenter Server:
In the vSphere Web Client, navigate to the host.
On the Configure tab, click VMkernel adapters, then select the adapter.
Click Edit.
Set the MTU value to 9000. Note: you can increase the MTU size up to 9000 bytes.
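The same change can be made from the ESXi shell; a sketch, assuming vmk1 is the VMkernel interface in question (substitute the name for your host):

```shell
# Set the MTU on a VMkernel interface (vmk1 is a placeholder).
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Confirm the change; the MTU column should now read 9000.
esxcli network ip interface list
```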

More information can be found here: https://infohub.delltechnologies.com/l/smartfabric-services-with-multisite-vsan-stretched-cluster-de....
carmos
Contributor

How am I supposed to connect vCenter (a virtual machine) to a VMkernel adapter?

vCenter is a virtual machine and can only be connected to normal port groups, which connect to a vSwitch, etc.
VMkernel adapters are also connected to port groups and vSwitches, but a port group with a VMkernel adapter cannot be used by a VM.

kastlr
Expert

Hi,

check out the following link:

Photon OS Network Configuration

You would have to reconfigure the vCSA to use jumbo frames, and to be honest I don't think that's a good idea, simply because I see a risk that such a config change wouldn't survive an update or upgrade.

May I ask why you want to use jumbo frames with the vCSA?


Hope this helps a bit.
Greetings from Germany. (CEST)
carmos
Contributor

From my understanding, backup traffic passes through vCenter between the host and the backup server.
The backup server runs Veeam, so it would be using the standard VMware API to talk to vCenter.

... There is nothing I would like more than to be wrong about this; it doesn't seem logical, but it's the only conclusion I can reach after testing. It would make more sense if vCenter only brokered the connection and the actual backup traffic went directly between the ESXi host and the backup server. If that's the case, then there is indeed no need to configure jumbo frames in vCenter.

kastlr
Expert

Hi,

the backup is handled between the ESXi server that runs/owns the VM and the backup server.

vCSA is only used to figure out which ESXi server the VM is running on, so that the backup server can reach out to the right ESXi server.

 


Hope this helps a bit.
Greetings from Germany. (CEST)
carmos
Contributor

Hey, it certainly helps, and that's how I was hoping it worked. Thanks for confirming it!

My issue has been resolved. It came down to the ESXi server insisting on using vmnic0 (also vmk0), a 1Gb adapter, for management traffic, even though vmnic0 had no VMkernel adapter configured to handle management, and despite the fact that I had a different (10Gb) adapter configured to handle management. This meant that all traffic to/from the ESXi host went through the 1Gb vmnic.

I set up a new port group connected to a 10Gb adapter, pointed vmk0 there, and re-enabled management on it. I now have two management VMkernel adapters, each connected to a 10Gb NIC. Backup traffic now runs at 10Gb with jumbo frames, so I'm happy.

I'm particularly happy to have discovered iperf in ESXi; using it together with iperf for Windows on my backup server to verify transfer speeds helped A LOT!
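For anyone repeating this, a sketch of that throughput test, assuming the iperf3 binary bundled with recent ESXi builds (the path is an assumption and varies by build; the address is a placeholder):

```shell
# Server side, on the ESXi host. The bundled binary location is an
# assumption; on some builds it must be copied under a new name
# before ESXi will allow it to run.
/usr/lib/vmware/vsan/bin/iperf3 -s

# Client side, on the Windows backup server (10.0.0.20 = the host's
# 10Gb management IP; -t 30 runs the test for 30 seconds):
#   iperf3.exe -c 10.0.0.20 -t 30
```

The reported bitrate should approach line rate on a healthy 10Gb path; a result near 1Gb would point back at traffic taking the wrong vmnic.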

Thanks again for the help!
