VMware Cloud Community
LordArthas1
Contributor

Trunk is not supported on a port channel? Why?

Hello,

VMware Knowledge Base

Based on this article, and the fact that I tried trunking a bundled port to my Dell server on vmnic1 and vmnic2 and the server couldn't route the traffic, I think ESXi doesn't have the ability to do trunking (dot1q) on a port channel (bundled ports)?

In this article they give an example of port-channeling a 6500 to a server with an access port.

I don't want an access port on a port channel; I want to do trunking (dot1q). Is that not possible?

This is my switch config:

interface Vlan2
 ip address 192.168.0.30 255.255.255.0

interface Vlan3
 ip address 192.168.1.254 255.255.255.0

interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk

interface FastEthernet1/0/1   (goes to vmnic1)
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode on
end

interface FastEthernet1/0/2   (goes to vmnic2)
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode on

[screenshot: pastedImage_3.png]

The Server 2016 VM can't reach VLAN 3; it can't even reach 192.168.0.30.

8 Replies
andrewpilachows
Enthusiast

Does your Cisco switch have L3 capability, or is it connected to an L3 switch/router? I'm also trying to figure out your VLAN configuration: why is it not 192.168.0.1 255.255.255.0 and 192.168.1.1 255.255.255.0? If you are trunking to an L3 switch/router, a VLAN defined on another switch may not be routable, or you may be missing your default gateway/default route definition. Per the document you linked, it looks like you are missing the "no ip address" lines defined on your FastEthernet and port-channel interfaces.

Not knowing if this is experimentation or design, I can tell you a real-world example. Our Ops department had started out trying to marry link aggregation on Cisco switches with dvSwitches on the vCenter side. After many headaches the setup caused, they decided to take the painful route of removing the link aggregation and relying only on teaming on the vCenter side. The built-in teaming in ESXi is pretty robust as-is and can handle active/passive failover or load balancing on its own, per your requirements.
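As a rough sketch of what the switch side could look like in that plain-teaming setup (assuming the same FastEthernet1/0/1 and 1/0/2 uplinks and dot1q trunking as in your config; exact syntax may vary by platform), you drop the channel and leave the ports as independent trunks:

! Sketch only: independent dot1q trunks, no EtherChannel.
! ESXi NIC teaming (default "Route based on originating virtual port ID")
! handles failover and load distribution on the host side.
interface range FastEthernet1/0/1 - 2
 no channel-group
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
no interface Port-channel1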

sjesse
Leadership

Take a look at the other posts on here about doing LACP and port channels on ESXi; it hardly ever works as expected, and you're generally better off using the NIC teaming method.

LordArthas1
Contributor

Can you just confirm whether this feature is even supported in ESXi: having a trunk over a port channel, meaning carrying the traffic of 2, 4, or more different VLANs between ESXi and a switch on a bundled port (port group)?

Just one thing: I haven't done anything on the ESXi host other than setting the VLAN ID and port group and choosing two uplinks.

I know we can have that on a single port; a trunk on a single port (link) is possible. But when I bundle the ports, I can't send pings to any hosts in another VLAN. I can reach the gateway of another VLAN from my ESXi virtual machine, but I can't reach the hosts inside that VLAN, and the gateway on those hosts is set correctly.

P.S. Whenever I try to form the port channel on the Cisco side with anything other than mode on, i.e. negotiating with LACP, the port channel won't come up on the Cisco side.
andrewpilachows
Enthusiast

Possible? Probably, if you precisely set up the Cisco switch and the vSwitch. Easy to set up and get running? No, you can't just throw in a port channel command and add two NICs to the vSwitch.

What is your use case to have bonded ports on the switch side?

If you are just looking for load balancing/port aggregation, don't worry about the Cisco ports; just set multiple NICs active on the vSwitch and call it a day. The vSwitch will handle load balancing in roughly the same way as setting the configuration on the Cisco switch, but on the host software side instead of the switch hardware side. Putting multiple physical NICs (vmnics) in a vSwitch gives you much the same effect as a port channel on a Cisco switch.

You cannot reach the host remotely now, but your VMs have LAN access? Sounds like you may have the same physical NICs on the host carrying both the VM Network and the Management network. If you are using the same physical NICs across multiple vSwitches, they all have to be configured the same way to allow traffic to pass through the Cisco switch. This is one reason why it's recommended to just use NIC teaming on the host: you do not have to troubleshoot the Cisco switch configuration, and you can also set up different teaming policies on each vSwitch according to your requirements.
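As a rough host-side sketch (assuming a standard vSwitch named vSwitch1 with uplinks vmnic1 and vmnic2; adjust the names to your host), the teaming can also be set from the ESXi shell:

# Sketch only: both uplinks active, default load balancing
# ("Route based on originating virtual port ID" = portid).
# No port channel is needed on the Cisco side for this policy.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic1,vmnic2 \
    --load-balancing=portid

# Verify what the vSwitch is actually using
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1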

Requirements to set up link aggregation on the switch side: you have to know how to get a working port channel set up on the Cisco switch.

https://kb.vmware.com/s/article/1001938 

Set up link aggregation on the host side

Configure NIC Teaming, Failover, and Load Balancing on a vSphere Standard Switch or Standard Port Gr...

Load Balancing Algorithms Available for Virtual Switches

Teaming and Failover Policy

andrewpilachows
Enthusiast

Response to your edit: it looks like you are configuring a standard vSwitch. LACP only works with a distributed virtual switch (dvSwitch), which is configured through vCenter. If this is an unmanaged host, then no, you cannot use LACP on the Cisco switch; you must use regular trunked ports on the switch and NIC teaming on the host.

LordArthas1
Contributor

They have the same config, I guess; NIC1 and NIC2, I haven't changed anything on vmnic1 and vmnic2.

And no, there is no management network on it. Take a look at these pictures please; the management network is on another port group.

[screenshot: pastedImage_2.png]

vSwitch0

[screenshot: pastedImage_0.png]

vSwitch1 (my switch)

[screenshot: pastedImage_3.png]

So on the switch I have two switched virtual interfaces (SVIs), in case you aren't familiar with them:

the virtual interface for VLAN 2 has an IP address, of course, which is the gateway of that VLAN/subnet,

and the same goes for VLAN 3: it has an IP address which is the gateway of VLAN 3.

When I send a ping from the virtual machine (Windows Server 2016) on VLAN 2 to VLAN 3's SVI IP address, it is successful,

but if I ping a host inside VLAN 3 it won't get there. The same is true in reverse: a host inside VLAN 3 can't reach the VM itself, but it can reach the VLAN 2 SVI IP address.

FYI, from another port on the switch, on VLAN 2 or any other VLAN, I can reach any host inside VLAN 3, just not from the ESXi VM, so it is not a switch configuration issue.

LordArthas1
Contributor

This is the ping reply from the VM on VLAN 2:

[screenshot: pastedImage_0.png]

So then I sniffed the host on VLAN 4 and found out that ICMP packets are actually being received on that end, so the switch can route them all the way to the host, but the host can't reply back. I don't know why.

[screenshot: pastedImage_2.png]

I attached the Wireshark capture, any ideas?

192.168.0.10 is the VM on the ESXi host, on VLAN 2

192.168.4.14 is the regular host on VLAN 4

192.168.0.30 is the gateway (VLAN 2 SVI)

192.168.4.200 is the gateway (VLAN 4 SVI)

IP routing is enabled

all subnet masks are /24

I also get this on the switch when I ping from the VLAN 4 host to the VLAN 2 VM on ESXi:

SW2#

*Mar  1 10:24:07.198: %ADJ-3-RESOLVE_REQ: Adj resolve request: Failed to resolve 192.168.0.10 Vlan2

*Mar  1 10:24:17.155: %ADJ-3-RESOLVE_REQ: Adj resolve request: Failed to resolve 192.168.0.10 Vlan2

*Mar  1 10:24:22.163: %ADJ-3-RESOLVE_REQ: Adj resolve request: Failed to resolve 192.168.0.10 Vlan2
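For what it's worth, that %ADJ-3-RESOLVE_REQ message says the switch is failing to resolve the adjacency (ARP) for 192.168.0.10 on Vlan2, which fits the symptom of replies never making it back to the VM. A few standard IOS show commands (a sketch; exact output varies by platform) can confirm the state of the channel, the trunk on the bundle, and where the switch has learned the VM's MAC address:

show etherchannel 1 summary
show interfaces port-channel 1 trunk
show mac address-table interface port-channel 1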

andrewpilachows
Enthusiast

How about a screenshot of the vSwitch 1 configuration? Do you have it set to route based on IP hash? Are vmnic1 and vmnic2 both active?

Keep in mind, though, that ESXi 5.1, 5.5, 6.0 and 6.5 support LACP on vDS only. You are configuring a standard vSwitch, so you are restricted to EtherChannel. Take a look at your switch documentation for configuring an EtherChannel; you may be missing a few commands to set it up. Did you create the NIC port channel with channel-group X mode on ("on" forces the port to channel without PAgP or LACP)? You also may be missing the IP address on the port channel itself. I haven't gone this in-depth into switch configuration before and can't help you much more than this, but it sounds like the vSwitch can route from the VM out to the physical switch, while the physical switch is missing some configuration that is preventing traffic from reaching back to the VMs.
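One concrete thing to check on the host side (a sketch, assuming the static channel-group 1 mode on from your earlier config and that both uplinks sit on vSwitch1; names may differ on your host): a static EtherChannel generally requires the standard vSwitch to use "Route based on IP hash" with both vmnics active, per the VMware KB article linked earlier in the thread. From the ESXi shell that would look roughly like:

# Sketch only: IP-hash teaming to match a static EtherChannel on the switch.
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic1,vmnic2 \
    --load-balancing=iphash

Any port groups on that vSwitch should inherit (or be set to) the same policy; a mismatch between the switch-side channel and the vSwitch teaming policy can produce exactly the kind of one-way/ARP failures you are seeing.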

Catalyst 3750-X and 3560-X Switch Software Configuration Guide, Release 12.2(55)SE - Configuring Eth...

Configuring EtherChannel and 802.1Q Trunking Between Catalyst L2 Fixed Configuration Switches and a ...
