AddproErik
Contributor

Nested ESXi 5.5 dVS networking issue

Hey,

I just upgraded a few hosts in my virtual home lab from ESXi 5.1 to 5.5 and they all lost their networking connectivity after the upgrade. The nested hosts were configured with jumbo frames (MTU 9000) and they were all connected to a dvswitch configured for jumbo frames (MTU 9000). Before the upgrade it was fully working with no issues.

After the upgrade the nested hosts could no longer get on the network at all. After some troubleshooting and messing around I found that lowering the MTU on the nested hosts to anything below 8800 restored connectivity.
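
For anyone who wants to reproduce the test: a quick way to check jumbo frames end to end from the ESXi shell is vmkping with the don't-fragment flag (the target IP below is just a placeholder, not my actual lab address):

# 8972 bytes of ICMP payload + 28 bytes of headers = a 9000-byte frame
vmkping -d -s 8972 192.168.1.10

# list the configured MTU of each vmkernel interface
esxcli network ip interface list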

The physical ESXi host running the nested hosts has also been upgraded to ESXi 5.5, but the issue was the same with both versions on the physical host.

The issue was also the same with dvSwitch versions 5.1 and 5.5.

I can still use jumbo frames on nested hosts if using a regular vSwitch.
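
For reference, this is the sort of standard vSwitch configuration that still works for me (the vSwitch and vmkernel names are examples, adjust to your setup):

# raise the MTU on the standard vSwitch and on the vmkernel interface
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000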

Does anyone know if anything in the network stack has changed in ESXi 5.5 that could cause this behavior?

Thanks

7 Replies
grace27
Enthusiast

Hi

Welcome to the communities.

If it works after lowering the MTU to below 8800, then there is nothing more to test, as that is the maximum.

The worst enemy to creativity is self-doubt.
Duco
Enthusiast

I already reported this issue during the beta, but never figured out if it can be fixed.

See Re: Issue with jumbo frames on (virtual) esxi server

Duco

Duco
Enthusiast

Erik, I figured out that this issue (at least in my lab) only happens when the nested ESXi server is configured with e1000 adapters on the physical host. If I configure the VM with e1000e or vmxnet3, jumbo frames work without any problems.
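
In case it helps, the adapter type of the nested ESXi VM can be changed in its .vmx file while the VM is powered off (ethernet0 is just an example, repeat for each adapter):

# replace the e1000 adapter with e1000e (or "vmxnet3")
ethernet0.virtualDev = "e1000e"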

Can you confirm that you are using e1000 adapters and that the problem disappears if you replace them with e1000e or vmxnet3?

Duco

AddproErik
Contributor

Duco,

Yes, I have been using e1000 adapters on my nested hosts. I will swap them out and verify whether it fixes my issue as well. Sounds promising!

Thanks!

Best Regards

Erik Jeansson

Duco
Enthusiast

I seem to be having some other issues now with vmxnet3 (I can send 8972-byte packets now, but the connection to storage is not stable ...). I will do some additional tests tomorrow and let you know.

Duco

Duco
Enthusiast

Hmmmm, although ICMP worked on the nested ESXi server using 8972-byte packets, TCP did not work as expected, not even on switches that were configured for 1500 bytes backed by vmxnet3 adapters ...
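
For reference, a quick TCP-level check from the nested ESXi shell is nc in port-scan mode (the IP and port below are placeholders; 3260 would be an iSCSI target):

# test whether a TCP connection to the storage target can be opened at all
nc -z 192.168.1.20 3260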

vMotions failed, access to the nested ESXi host was lost, etc. ...

Using ethtool -S I noticed that TSO was being used, so I tried to disable TSO, but that does not seem to be supported.
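
For reference, this is roughly what I looked at (the vmnic name is an example; whether the advanced option actually helps in a nested setup is an open question, and changing it may require a reboot):

# per-NIC statistics, look for TSO-related counters
ethtool -S vmnic0 | grep -i tso

# global toggle for hardware TSO
esxcli system settings advanced set -o /Net/UseHwTSO -i 0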

So I have now changed the setup to use e1000e NICs, and up to now that seems to work without any issues (and e1000e is a NIC type that can be selected when the guest OS type is set to ESXi 5.x).

So skip testing with vmxnet3 and test e1000e instead.

Duco

Duco
Enthusiast

See Issue with jumbo frames after upgrading nested ESXi servers in the lab to 5.5 and fix, where I describe the issue and the fix.

Lemme know if it works for you as well.

Duco
