VMware Networking Community
sydneyboyz23
Enthusiast

How to set an MTU of 1600 for a home NSX test lab on VMware Workstation

Hi,

I am setting up a home lab to build an NSX environment. I am currently running 3 nested ESXi hosts on VMware Workstation 11, each with 4 Ethernet adapters.

To set up VXLAN for my home lab, I modified the .vmx file of the virtual machine and changed the adapter type to vmxnet3, which works:

ethernet1.connectionType = "custom"

ethernet1.virtualDev = "e1000"

ethernet1.wakeOnPcktRcv = "FALSE"

ethernet1.addressType = "generated"

ethernet2.present = "TRUE"

ethernet2.vnet = "VMnet1"

ethernet2.connectionType = "custom"

ethernet2.virtualDev = "vmxnet3"  --------------------- Changed to vmxnet3

The vmxnet3 adapter type is not officially supported on VMware Workstation, and one of my hosts has crashed multiple times with the error messages below in the log file.

2015-08-02T13:37:48.254+10:00| vcpu-4| I120: Ethernet3 MAC Address: 00:0c:29:a4:7d:8c

2015-08-02T13:37:48.261+10:00| vcpu-4| I120: VMXNET3 user: Ethernet3 Driver Info: version = 16908544 gosBits = 2 gosType = 1, gosVer = 0, gosMisc = 0

2015-08-02T13:37:48.527+10:00| vcpu-5| I120: VMXNET3 hosted: Packet has no eop, scanned 1, tx ring size 512, txd valid 0.

2015-08-02T13:37:48.527+10:00| vcpu-4| I120: Ethernet3 MAC Address: 00:0c:29:a4:7d:8c

2015-08-02T13:37:48.536+10:00| vcpu-4| I120: VMXNET3 user: Ethernet3 Driver Info: version = 16908544 gosBits = 2 gosType = 1, gosVer = 0, gosMisc = 0

2015-08-02T13:37:51.436+10:00| vmx| I120: E1000: E1000 rx ring full, drain packets.

2015-08-02T13:37:53.636+10:00| mks| I120: MKS-SWB: Number of MKSWindows changed: 1 rendering MKSWindow(s) of total 2.

2015-08-02T13:37:55.565+10:00| vmx| I120: VMXVmdbCbVmVmxExecState: Exec state change requested to state poweredOff without reset, soft, softOptionTimeout: 20000000.

2015-08-02T13:37:55.565+10:00| vmx| I120: Stopping VCPU threads...

2015-08-02T13:37:56.567+10:00| svga| I120: SVGA thread is exiting

2015-08-02T13:37:56.571+10:00| mks| I120: MKS-SWB: Number of MKSWindows changed: 0 rendering MKSWindow(s) of total 1.

2015-08-02T13:37:56.575+10:00| mks| I120: GDI-Backend: stopped by HWinMux to do window composition.

2015-08-02T13:37:56.575+10:00| mks| I120: MKS-SWB: Number of MKSWindows changed: 0 rendering MKSWindow(s) of total 0.

2015-08-02T13:37:56.575+10:00| vmx| I120: MKS thread is stopped

Can someone guide me on how to set up an NSX lab on VMware Workstation and use a 1600 MTU for VXLAN traffic? Any help is much appreciated.


Accepted Solutions
sydneyboyz23
Enthusiast

Hi all,

I managed to fix this issue by separating the storage traffic onto a different vSwitch and the NSX traffic onto a dedicated VDS.

~ # esxcfg-nics -l

Name    PCI           Driver      Link Speed     Duplex MAC Address       MTU    Description

vmnic0  0000:02:01.00 e1000       Up   1000Mbps  Full   00:0c:29:a4:7d:6e 1500   Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)

vmnic1  0000:02:04.00 e1000       Up   1000Mbps  Full   00:0c:29:a4:7d:78 1500   Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)

vmnic2  0000:02:05.00 e1000       Up   1000Mbps  Full   00:0c:29:a4:7d:82 1600   Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)

vmnic3  0000:0b:00.00 vmxnet3     Up   10000Mbps Full   00:0c:29:a4:7d:8c 1600   VMware Inc. vmxnet3 Virtual Ethernet Controller

~ # esxcfg-vswitch -l

Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks

vSwitch0         1536        5           128               1500    vmnic0

  PortGroup Name        VLAN ID  Used Ports  Uplinks

  VM Network            0        0           vmnic0

  VMkernel              0        1           vmnic0

  Management Network    0        1           vmnic0

DVS Name         Num Ports   Used Ports  Configured Ports  MTU     Uplinks

dvSwitch         1536        10          512               1600    vmnic3  --------------------- for the Network Traffic

  DVPort ID           In Use      Client

  7                   1           vmnic3

  81                  1           NSX_Controller_4be65643-37ca-468e-b00a-f711d6cc5447.eth0  --------- NSX controller node 2

  79                  1           NSX_Controller_8bb60e33-b290-4252-92b1-501aa2f83607.eth0 ---------- NSX Controller node 3

  110                 1           vmk2 --- vtep interface for the host 

  119                 1           Linux1.eth0   ---- test Linux VM

The lab works fine for me now. My advice for a home lab is: do not use the same NIC for storage traffic and NSX traffic.

Thanks everyone for your help.

15 Replies
cann0nf0dder
Enthusiast

I think we are talking about two different issues here.

1. As for the network card, you would be on the safe side checking whether that card is on the HCL (is this a physical NIC or nested ESXi?).

2. As for the MTU size, this is configured when preparing the hosts for NSX.

Once configured, this creates a VMkernel port on each host in the cluster as the VXLAN Tunnel Endpoint (VTEP) with the specified MTU (1600 by default, I believe).

Working with NSX - Configuring VXLAN and VTEPs - Wahl Network

Sateesh_vCloud

Please check the link below (a bit old) for NSX setup in Workstation:

NSX Home LAB Part 1 | VMware Professional Services

An MTU of 1600 is applied by default when you create logical switches. If you only intend to test between logical switches, there is no need to worry about the 1600 MTU.

sydneyboyz23
Enthusiast

Hi,

This is nested ESXi installed on VMware Workstation. By default, a VMware Workstation virtual machine uses the E1000 network adapter, which can only send a standard frame with an MTU of 1500.

This is a problem because we cannot ping between the two VXLANs across the logical switch using an MTU of 1600: the VMkernel sends 1600-byte frames, but the E1000 adapter can only send 1500-byte frames.

Host 1 ----->>>> VMkernel port (1600 MTU, VTEP interface) --->>> E1000 (1500 MTU), and this is what caused the issue. I want to know if there is a way to configure a 1600 MTU for a VM running on VMware Workstation.

Please let me know if this makes things clearer.
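To see why the E1000's 1500-byte limit breaks 1600-MTU VTEP traffic, here is a rough sketch of the VXLAN encapsulation overhead (IPv4 outer headers, no VLAN tags assumed):

```python
# VXLAN adds an inner Ethernet header, a VXLAN header, a UDP header,
# and an outer IP header on top of the guest's IP packet.
INNER_ETH_HDR = 14   # encapsulated guest Ethernet header
VXLAN_HDR = 8
UDP_HDR = 8
OUTER_IPV4_HDR = 20

def outer_ip_size(guest_ip_mtu: int) -> int:
    """Size of the outer IP packet carrying one max-size guest packet."""
    return guest_ip_mtu + INNER_ETH_HDR + VXLAN_HDR + UDP_HDR + OUTER_IPV4_HDR

print(outer_ip_size(1500))  # 1550: too big for the E1000's 1500-byte MTU
```

So a standard 1500-byte guest packet becomes a 1550-byte packet on the transport network, which is why NSX asks for at least 1600 on the underlay: that leaves headroom for VLAN tags or IPv6 outer headers.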

cann0nf0dder
Enthusiast

I got you!

Poked around, and there aren't many straightforward answers; it looks like the MTU is hardcoded into vmnet.

vmnet MTU on host machine

However, what you tried should work: changing e1000 to vmxnet3.

To check that the driver is loaded correctly, run the following from your ESXi hosts:

# esxcfg-nics -l        (look for vmxnet3)

# vmkload_mod -s vmxnet3    (check that the driver is loaded)

An alternative, easy solution is to keep the outer ESXi host on e1000, give it some more resources and a different IP, and create your ESXi hosts for NSX testing nested within that host.

This will allow you to use vmxnet3 as the network adapter.

Hope that helps.

larsonm
VMware Employee

Looks like these guys are doing what you're trying to do. Ignore the title; they're using NSX in VMware Workstation.

Not able to ping the storage with an MTU of 9000

You may have better luck moving from Workstation to ESXi and running your nested NSX environment there. That's what I've done.

cann0nf0dder
Enthusiast

Yeah, I suggested that may be the best bet.

Let us know how you get on, sydneyboyz23.

Ryanware
Contributor

Hello there,

I have somewhat the same issue, but not in a test environment.

I have three physical servers, each with 4x10Gb NICs connected to the Nexus core switches. I can't ask our network admin to change the current MTU from 1500 to 1600.

And I can't ping the VTEPs at all.

Is this because the existing physical switch MTU is 1500?

sydneyboyz23
Enthusiast

Hi,

We need an MTU of 1600 on the physical ports: when the VTEP interface builds a VXLAN packet of up to 1600 bytes, a physical switch with a 1500-byte MTU will drop it, and the traffic fails.

The MTU needs to be 1600 across the entire transport path.

Here is a useful KB article for you:

VMware KB: Configuring jumbo frame support on NSX for vSphere and VCNS
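A common way to verify that the transport path really carries VXLAN-sized frames is a don't-fragment ping between VTEPs (for example, `vmkping ++netstack=vxlan -d -s 1572 <remote-VTEP-IP>` from the ESXi shell; the exact interface and netstack options depend on your NSX version). The payload size is the link MTU minus the IPv4 and ICMP headers; a quick sketch of that arithmetic:

```python
IPV4_HDR = 20
ICMP_HDR = 8

def df_ping_payload(link_mtu: int) -> int:
    """ICMP payload size that exactly fills a link of the given MTU."""
    return link_mtu - IPV4_HDR - ICMP_HDR

print(df_ping_payload(1600))  # 1572: test size for a 1600-MTU VTEP path
print(df_ping_payload(1500))  # 1472: test size for a standard 1500-MTU link
```

If the 1572-byte don't-fragment ping fails but the 1472-byte one succeeds, some device in the path is still at a 1500-byte MTU.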

Thanks.

VaseemMohammed
Enthusiast

Yes, the MTU must be set to the recommended 1600 on the network used by the VTEPs, not necessarily on the whole network.

dude64
Enthusiast

Since this is a lab, does NSX allow the MTU to be set low enough that VXLAN would fit in a normal 1500-byte Ethernet frame? In my case, I'm developing under Workstation 12 with a pair of hefty laptops connected over 1GbE. Workstation doesn't appear to support jumbo frames, and I'm trying to leverage nested ESXi on the laptops for portability. Thoughts?

Thanks,

dude64

larsonm
VMware Employee

VXLAN will not fit in a 1500-byte frame. For VXLAN traffic to flow freely from nested host to nested host, the vSwitch in VMware Workstation would require an MTU setting of 1600.

dude64
Enthusiast

Looking at the VXLAN IETF RFC, there's nothing in the standard that prevents a smaller MTU. As I read it, if the internal Ethernet frame MTU is set to 1400, and the VDS and the VMkernel adapters used by NSX are set to 1500, then it may work, albeit a bit slower. I'm experimenting with this now, as vSphere and NSX do let me adjust the MTU downward in the UI, below the default 1600.

I'm running VMware Workstation Pro v12.1.0, nesting vSphere 6 and NSX 6.2.2. There doesn't seem to be a way to adjust Workstation's MTU, thus requiring this workaround.
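The arithmetic behind this workaround, as a sketch (IPv4 outer headers and the usual ~50-byte VXLAN overhead assumed):

```python
# inner Ethernet (14) + VXLAN (8) + UDP (8) + outer IPv4 (20) = 50 bytes
VXLAN_OVERHEAD = 14 + 8 + 8 + 20

def fits_transport(inner_mtu: int, transport_mtu: int = 1500) -> bool:
    """Does a max-size encapsulated guest packet fit the transport MTU?"""
    return inner_mtu + VXLAN_OVERHEAD <= transport_mtu

print(fits_transport(1500))  # False: the original failure on Workstation
print(fits_transport(1400))  # True: 1450 bytes on the wire, 50 bytes spare
```

With a 1500-byte transport, the largest inner MTU that fits is 1450, so 1400 leaves a comfortable margin.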

Thoughts?

Thanks,

dude64

dude64
Enthusiast

As a follow-up, setting ALL the interfaces of the LDR and ESGs, and everything associated, to an MTU of 1400 does appear to work with VMware Workstation 12. So far, testing with Windows SMB file transfers shows the external Ethernet packets on the wire staying within 1500 bytes between two VMs on different ESXi hosts running on two different VMware Workstation physical nodes.

dude64
Enthusiast

Raising this to the top of the forum to see if anyone has more ideas. I've run into a scenario (Pivotal Cloud Foundry) where the Workstation Pro MTU issue is impacting this lab: the installation is failing because of large-packet SSH transfers. I'm running Workstation 12.5.7, and the nested PCF VMs, which run Ubuntu 14, don't appear to discover the underlying MTU set at 1400. Has anyone figured out how to get Workstation to support an MTU of 1600? Again, the VXLAN RFC does appear to allow this. Thanks,

dude64
