Jock8186
Enthusiast

Cisco Switchport Trunk with Nested ESXi 6

Guys,

I'm having an issue establishing connectivity between a port group inside a nested ESXi host and a Cisco Catalyst 3550 trunk port. I'll try to paint the picture in text, but I have also attached a few screenshots of my Catalyst trunk port config, my bare-metal (BM) ESXi config, and my nested ESXi (vESXi) config:

I need two VLANs (port groups) within my nested vESXi host to be able to pass traffic to my L3 switch over the same physical uplink (Gi0/5):

- I have configured port Gi0/5 on my Cisco Catalyst switch for trunk mode with dot1q encapsulation, allowing VLANs 40 and 50.
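For reference, the interface config is roughly this (paraphrased; the attached screenshot has the exact running config):

    ! trunk to the bare-metal ESXi host (vmnic3)
    interface GigabitEthernet0/5
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 40,50
     switchport mode trunk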

- I have a bare-metal ESXi host. This host has a virtual standard switch (vSwitch1) configured with a port group called DC1_VMTx. This port group has promiscuous mode set to Accept and is tagged with VLAN 50.
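I set this through the client, but the equivalent esxcli commands would be roughly:

    # tag the BM port group with VLAN 50 and allow promiscuous mode
    esxcli network vswitch standard portgroup set -p DC1_VMTx -v 50
    esxcli network vswitch standard portgroup policy security set -p DC1_VMTx --allow-promiscuous true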

- The only active uplink for this port group is physical NIC vmnic3, which is cabled to trunk port Gi0/5 on my Catalyst switch.
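A quick sanity check on both ends would be roughly:

    # on the BM ESXi host: pin vmnic3 as the only active uplink for DC1_VMTx
    esxcli network vswitch standard portgroup policy failover set -p DC1_VMTx -a vmnic3

    ! on the 3550: Gi0/5 should show as trunking with VLANs 40,50 allowed
    show interfaces trunk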

- Attached to this port group (DC1_VMTx) is a management VM (Windows 8) with an IP in the 192.168.50.x/24 range. It can pass traffic and ping its DG/SVI on the L3 switch without a problem.

- Also attached to this port group (DC1_VMTx) is a nested ESXi 6 host. This vESXi host has an E1000e NIC assigned to the DC1_VMTx port group on my bare-metal ESXi host.
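The relevant lines from the nested host's .vmx look roughly like this ("vmkernel6" being the ESXi 6.x guest type):

    guestOS = "vmkernel6"
    ethernet0.virtualDev = "e1000e"
    ethernet0.networkName = "DC1_VMTx"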

- Inside my nested vESXi host I have a virtual standard switch configured with a port group called "VMTx_VLAN50".

- The only active uplink assigned to this port group is vmnic2, which is the E1000e NIC assigned to port group DC1_VMTx on my bare-metal ESXi host.

- This port group is configured for VLAN 50 with promiscuous mode set to Accept, as sketched just below.
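Putting the last three points together, the nested host's config is roughly the following (I'm assuming the nested switch is named vSwitch0 here; the screenshot shows the actual names):

    # on the nested vESXi host
    esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic2
    esxcli network vswitch standard portgroup add -v vSwitch0 -p VMTx_VLAN50
    esxcli network vswitch standard portgroup set -p VMTx_VLAN50 -v 50
    esxcli network vswitch standard portgroup policy security set -p VMTx_VLAN50 --allow-promiscuous true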

- I have a VM assigned to this port group within the nested ESXi host, also in the 192.168.50.x/24 range, which CANNOT ping its DG or SVI.

- When trunk mode is disabled and Gi0/5 is a standard access port assigned to VLAN 50, with no VLAN tagging on either the BM or the nested port groups, the VM inside the nested vESXi host can reach the DG/SVI and the L3 switch without a problem. The issue only appears once I reconfigure the physical switch port to trunk VLANs 40 & 50 and set the port groups to tag the relevant VLANs, although the management VM on the bare-metal ESXi host continues to work fine.
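For comparison, the access-port fallback that works is roughly:

    ! access-mode config under which the nested VM can reach its gateway
    interface GigabitEthernet0/5
     switchport mode access
     switchport access vlan 50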

From my perspective the issue appears to be occurring between the bare-metal vSwitch and the nested vSwitch, but I am at a total loss as to what it could be. I hope I have explained the situation sufficiently.

Any help would be greatly appreciated.

Thanks,

Jock
