VMware Networking Community
Floki00
Enthusiast

NSX DLR internal interfaces connected to "none" standard switch

All internal interfaces on an NSX DLR appliance are connected to a "None" standard switch instead of the logical switches assigned to them. I have never seen this before; we are running NSX 6.4.0 Build 7564187 and ESXi 6.5.0 Build 5969303.

Has anyone else noticed this type of issue?

4 Replies
cnrz
Expert

Could this be the DLR Control VM, which has 10 vNIC interfaces? They don't pass traffic, so they are not assigned a port group. The LIF interfaces are connected to the VXLAN logical switches and act as the default gateway for the VMs on those logical switches.

The DLR's 10 NIC interfaces are shown on the DLR Summary page:

NSX for Newbies – Part 6: Distributed Logical Router (dLR) | blog.bertello.org

Floki00
Enthusiast

Hi Canero,

We are facing the issue described in screenshot 6.1, as explained in the blog statement below.

"In my environment I'm running NSX 6.1.1 and although I can see all the LIFs ip addresses assigned to the Control VM the Network adapters aren't actually listed. I'm not sure if this is an expected behaviour of 6.1 or a bug but I clearly remember that on 6.0 all the network adapters were listed."

In screenshot 6.1, Network adapter 2 shows as connected to "none"; for us this is the same. But when you view the same interface from the NSX Manager view --> DLR --> Manage, Settings --> Interfaces, the vNIC shows as connected to the CORRECT logical switch we assigned it to.

If I now view Logical Switches, select the logical switch the DLR should be connected to, and select "Virtual Machines", it shows the DLR is NOT connected to that logical switch.

That is our problem: the DLR says it is connected to a logical switch, but it is NOT. Instead, it is connected to a "none" port group, as shown in screenshot 6 of the blog link you supplied.

Thanks,

Floki

cnrz
Expert

The screenshot shows the vNIC interfaces of the DLR Control VM, not the DLR instances themselves. The DLR Control VM, as the name implies, is only used for establishing dynamic routing adjacencies with the Edge VMs through the uplink interface on the transit VXLAN. It is even possible not to deploy a DLR Control VM at all and still have network connectivity, if only static routes are used. To establish routing adjacency with the Edge, the DLR Control VM needs only one vNIC interface (and possibly a second, if HA is used, for the heartbeat keepalives between DLR-0 and DLR-1). Since the DLR Control VM is deployed as an Edge appliance, it comes with 10 vNIC interfaces. This is why only 2 interfaces are used and the remaining 8 are not connected.

VM_Nic_10_Adapters.png

The LIF interfaces that are shown connected to the logical switches are not "real" NICs but pseudo-NICs, so they are not listed in that view. The DLR Control VM is not in the data path; the LIFs are the interfaces on the DLR kernel module instances residing on the ESXi hosts, which are in the data path. The DLR instance's LIF interfaces on each host have exactly the same IP and MAC address. The LIF interface view of the DLR Control VM Edge appliance is shown below (only vNic_2 is used; the other interfaces are VDR interfaces on which no traffic passes on the Control VM):

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.4/com.vmware.nsx.troubleshooting.doc/GUID-6801A9...

NSX Routing Control Plane CLI

DLR Control VM

The DLR Control VM has LIFs and routing/forwarding tables. The major output of the DLR Control VM's lifecycle is the DLR routing table, which is a product of its interfaces and routes.

edge-1-0> show ip route

Codes: O - OSPF derived, i - IS-IS derived, B - BGP derived,
C - connected, S - static, L1 - IS-IS level-1, L2 - IS-IS level-2,
IA - OSPF inter area, E1 - OSPF external type 1, E2 - OSPF external type 2

Total number of routes: 5

S       0.0.0.0/0            [1/1]         via 192.168.10.1
C       172.16.10.0/24       [0/0]         via 172.16.10.1
C       172.16.20.0/24       [0/0]         via 172.16.20.1
C       172.16.30.0/24       [0/0]         via 172.16.30.1
C       192.168.10.0/29      [0/0]         via 192.168.10.2

edge-1-0> show ip forwarding
Codes: C - connected, R - remote, > - selected route, * - FIB route
R>* 0.0.0.0/0 via 192.168.10.1, vNic_2
C>* 172.16.10.0/24 is directly connected, VDR
C>* 172.16.20.0/24 is directly connected, VDR
C>* 172.16.30.0/24 is directly connected, VDR
C>* 192.168.10.0/29 is directly connected, vNic_2

  • The purpose of the Forwarding Table is to show which DLR interface is chosen as the egress for a given destination subnet.
    • The “VDR” interface is displayed for all LIFs of “Internal” type. The “VDR” interface is a pseudo-interface that does not correspond to a vNIC.

  • Interface vNic_0 in this example is the HA interface.
    • The output above was taken from a DLR deployed with HA enabled, and the HA interface is assigned an IP address. This appears as two IP addresses, 169.254.1.1/30 (auto-assigned for HA), and 10.10.10.1/24, manually assigned to the HA interface.
    • On an ESG, the operator can manually assign one of its vNICs as HA, or leave it as default for the system to choose automatically from available “Internal” interfaces. Having the “Internal” type is a requirement, or HA will fail.
  • Interface vNic_2 is an Uplink type; therefore, it is represented as a “real” vNIC.
    • Note that the IP address seen on this interface is the same as the DLR’s LIF; however, the DLR Control VM will not answer ARP queries for the LIF IP address (in this case, 192.168.10.2/29). An ARP filter applied to this vNIC’s MAC address makes this possible.
    • The point above holds true until a dynamic routing protocol is configured on the DLR, when the IP address will be removed along with the ARP filter and replaced with the “Protocol IP” address specified during the dynamic routing protocol configuration.
    • This vNIC is used by the dynamic routing protocol running on the DLR Control VM to communicate with the other routers to advertise and learn routes.
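The longest-prefix-match selection behind the forwarding table above can be sketched in a few lines. This is a generic illustration of standard routing behaviour, not NSX code; the routes and interface names are copied from the example output:

```python
# Sketch: how a forwarding table picks an egress interface for a destination,
# mimicking the "show ip forwarding" output above. The selection rule is
# longest-prefix match: the most specific matching prefix wins.
import ipaddress

# (prefix, next_hop_or_None_if_connected, egress_interface)
fib = [
    ("0.0.0.0/0",       "192.168.10.1", "vNic_2"),
    ("172.16.10.0/24",  None,           "VDR"),
    ("172.16.20.0/24",  None,           "VDR"),
    ("172.16.30.0/24",  None,           "VDR"),
    ("192.168.10.0/29", None,           "vNic_2"),
]

def lookup(dst):
    """Return (egress_interface, next_hop) for the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, next_hop, iface in fib:
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop, iface)
    return (best[2], best[1])

print(lookup("172.16.20.55"))  # an internal LIF subnet -> pseudo-interface "VDR"
print(lookup("8.8.8.8"))       # falls through to the default route -> uplink vNic_2
```

Note how every "Internal" LIF resolves to the pseudo-interface "VDR", while only the default route and the transit subnet resolve to the real uplink vNic_2, matching the output above.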

For the ESG Edge router, unlike the DLR Control VM, it is possible to have all 10 interfaces connected, and the same view would then list all 10 vNICs as connected. (One way to go beyond that limit is to use a trunk interface.) There is no LIF construct for the Edge: all of its NICs are in the data path, and there is no distributed routing on the Edge.

http://blog.bertello.org/2015/02/nsx-for-newbies-part-7-nsx-edge-gateway/

We can have up to 9 tenants connected to the same NSX Edge. Why 9? Because NSX Edge supports a maximum of 10 vNICs and 1 is required for uplink connectivity. In this scenario you cannot overlap subnets across the 9 tenants unless they are not advertised to the NSX Edge.

Edge_Maximum_10_Nicspng.png
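By the way, since the LIFs live in the host kernel modules rather than on the Control VM, you can also inspect them directly on an NSX-prepared ESXi host with the net-vdr utility. A rough sketch; the instance name "default+edge-1" is only an example, yours will come from the first command's output:

```
# List the DLR instances running in the host's kernel module
net-vdr --instance -l

# List the LIFs of one instance (name taken from the previous output)
net-vdr --lif -l default+edge-1
```

Running these on every host in the transport zone should show identical LIF IP/MAC pairs on each host, which is the distributed-routing behaviour described above.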

Floki00
Enthusiast

Hi Canero,

Thanks for your response. The problem is this: when we assign a LIF to a logical switch, the view within NSX states it is connected to that logical switch, but when I go to the logical switch and check the connected VMs, the DLR cannot be found there.

When I go to the "none" port group, I can see the DLR connected there instead. Please NOTE: this is fine for the other interfaces, which are NOT connected to anything yet; they should be on the "none" port group, as that port group is a placeholder for non-connected interfaces (as explained by VMware GSS, this is a "ghost port group").

I need the DLR to have an interface on a specific logical switch (Transit-SW). When I add it to this logical switch, it states it is connected, but it cannot contact/ping another VM on that same logical switch. When I go to Transit-SW, I can only see the single VM I added, when I should see both the DLR and the VM. As a result, the VM cannot reach the DLR, which should be its gateway.

Thanks,

Floki
