VMware Cloud Community
mcecconi
Contributor

vSphere 7.0.2 vDS LACP

Hello,

Testing in an environment with a single ESXi host, on which the vCenter VM is also running, I haven't managed to configure link aggregation for the additional 10G cards. The server is connected to two physical switches. The vDS is version 7.0.2. The process I followed to configure the LAG is described in the following article:

https://kb.vmware.com/s/article/1004048

The result is shown in the attached image. The links seem to be up; however, no LACP messages are received by the physical switches.
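For reference, this is how I've been checking the LACP state from the host side (vmnic3 is one of the LAG uplinks in my setup; adjust the name to your own environment):

```shell
# Show the negotiated LACP state for all LAGs on this host
esxcli network vswitch dvs vmware lacp status get

# Capture LACPDUs (slow-protocols EtherType 0x8809) on one uplink
pktcap-uw --uplink vmnic3 --ethtype 0x8809 -c 5
```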

The following error message is logged on the ESXi host:


2021-06-14T16:01:02.861Z cpu33:2097261)------------  ------------  ------------  ------------  ------------  ------------------------------
2021-06-14T16:01:02.861Z cpu33:2097261)      min,KB        max,KB   minLimit,KB       eMin,KB   rMinPeak,KB                            name
2021-06-14T16:01:02.861Z cpu33:2097261)------------  ------------  ------------  ------------  ------------  ------------------------------
2021-06-14T16:01:02.861Z cpu33:2097261)      204800        204800            -1        204800        204800  host/vim/vmvisor/config-file-tracker
2021-06-14T16:01:02.861Z cpu33:2097261)------------  ------------  ------------  ------------  ------------  ------------------------------
2021-06-14T16:01:02.861Z cpu33:2097261)           0            -1            -1          1092         69124                  python.2158530
2021-06-14T16:01:02.861Z cpu33:2097261)           0            -1            -1           132           132            uwWorldStore.2158530
2021-06-14T16:01:02.861Z cpu33:2097261)           0            -1            -1           136           136              worldGroup.2158530
2021-06-14T16:01:02.861Z cpu33:2097261)           0            -1            -1             0         67504                      uw.2158530
2021-06-14T16:01:02.861Z cpu33:2097261)           0            -1            -1           136           136                 vsiHeap.2158530
2021-06-14T16:01:02.861Z cpu33:2097261)           0            -1            -1           264           792                      pt.2158530
2021-06-14T16:01:02.861Z cpu33:2097261)           0            -1            -1           288           288              cartelheap.2158530
2021-06-14T16:01:02.861Z cpu33:2097261)           0            -1            -1             0             0               uwshmempt.2158530
2021-06-14T16:01:02.861Z cpu33:2097261)           0            -1            -1           136           136        uwAsyncRemapHeap.2158530
2021-06-14T16:01:02.861Z cpu33:2097261)------------  ------------  ------------  ------------  ------------  ------------------------------
2021-06-14T16:03:22.556Z cpu8:2135448)This message has repeated 964608 times: vmnic4: Tx pkt VLAN insertion is not possible since HW tagging is not enabled by FW
2021-06-14T16:08:24.176Z cpu23:2135441)This message has repeated 965632 times: vmnic3: Tx pkt VLAN insertion is not possible since HW tagging is not enabled by FW
2021-06-14T16:13:17.385Z cpu38:2135441)This message has repeated 966656 times: vmnic3: Tx pkt VLAN insertion is not possible since HW tagging is not enabled by FW
2021-06-14T16:18:17.581Z cpu4:2135448)This message has repeated 967680 times: vmnic4: Tx pkt VLAN insertion is not possible since HW tagging is not enabled by FW
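The repeated "HW tagging is not enabled by FW" message suggests the NIC firmware is involved, so it's probably worth checking the driver and firmware versions on the affected uplinks (vmnic4 is one of mine; substitute your own):

```shell
# Driver name, driver version and firmware version for the uplink
esxcli network nic get -n vmnic4
```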


2 Replies
mcecconi
Contributor

I disabled the firewall on the ESXi host to stop the error message from the previous post being logged, but the LAG is still not up. After attaching a VM to the port group, I can see some LACPv1 traffic going to the physical switch, but I'm not sure how to determine the actual MAC address of the LAG interface (if it differs from the physical ones).
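In case it helps anyone following along, the MAC addresses of the physical uplinks themselves can at least be listed from the host (the LAG's own addressing may differ, so treat this only as a starting point):

```shell
# List all physical NICs with their MAC addresses, driver and link state
esxcli network nic list
```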

mcecconi
Contributor

Figured out the problem, and it had nothing to do with the LACP version. The server I was using had Emulex cards, so I added another host with Intel NICs and the link came up like a charm. The issue seems to be the elxnet driver.
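For anyone who wants to check whether their uplinks are on the same driver before digging further, a quick sketch (vmnic3 is just an example name from my setup):

```shell
# Show which driver a given uplink is using (elxnet in my failing case)
esxcli network nic get -n vmnic3 | grep -i driver

# Show the installed elxnet driver VIB version, if present
esxcli software vib list | grep -i elxnet
```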
