All internal interfaces on an NSX DLR appliance are connected to a "None" standard switch instead of the logical switches assigned! I have never seen this before; I am using NSX 6.4.0 Build 7564187 and ESXi 6.5.0 Build 5969303.
Has anyone else noticed this type of issue?
The screenshot shows the vNIC interfaces of the DLR Control VM, not the DLR instances themselves. The DLR Control VM, as the name implies, is only used for establishing dynamic routing adjacencies with the Edge VMs through the uplink interface on the transit VXLAN. It is even possible not to deploy a DLR Control VM at all and still have network connectivity, if only static routes are to be used. To establish a routing adjacency with the Edge, the DLR Control VM needs only one vNIC interface (and possibly a second for the heartbeat keepalives between DLR-0 and DLR-1 if HA is used). Since the DLR Control VM is deployed as an Edge appliance, it comes with 10 vNIC interfaces. This is why only 2 interfaces are used and the remaining 8 are not connected.
The LIF interfaces shown connected to the logical switches are not "real" NICs but pseudo-NICs, so they are not listed in that view. The DLR Control VM is not in the data path; the LIFs are the interfaces on the DLR kernel-module instances residing on the ESXi hosts, which are in the data path. The LIF interfaces of the DLR instance on each host have exactly the same IP and MAC address. The LIF interface view of the DLR Control VM Edge appliance is as below (only vNic_2 is used; the other interfaces are VDR interfaces through which no traffic passes on the Control VM):
The DLR Control VM has LIFs and routing/forwarding tables. The major output of the DLR Control VM's lifecycle is the DLR routing table, which is the product of its interfaces and routes.
edge-1-0> show ip route
Codes: O - OSPF derived, i - IS-IS derived, B - BGP derived,
       C - connected, S - static, L1 - IS-IS level-1, L2 - IS-IS level-2,
       IA - OSPF inter area, E1 - OSPF external type 1, E2 - OSPF external type 2
Total number of routes: 5
S       0.0.0.0/0        [1/1] via 192.168.10.1
C       172.16.10.0/24   [0/0] via 172.16.10.1
C       172.16.20.0/24   [0/0] via 172.16.20.1
C       172.16.30.0/24   [0/0] via 172.16.30.1
C       192.168.10.0/29  [0/0] via 192.168.10.2
edge-1-0> show ip forwarding
Codes: C - connected, R - remote, > - selected route, * - FIB route
R>* 0.0.0.0/0 via 192.168.10.1, vNic_2
C>* 172.16.10.0/24 is directly connected, VDR
C>* 172.16.20.0/24 is directly connected, VDR
C>* 172.16.30.0/24 is directly connected, VDR
C>* 192.168.10.0/29 is directly connected, vNic_2
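The forwarding table above is literally "interfaces plus routes": the connected prefixes come from the LIFs and the uplink vNIC, and the static default is resolved on top of them. A minimal sketch of that composition (interface names and prefixes copied from the output above; the resolution logic itself is an illustration, not NSX code):

```python
import ipaddress

# Connected routes derive from the configured interfaces (LIFs + uplink vNIC).
interfaces = {
    "VDR":    ["172.16.10.1/24", "172.16.20.1/24", "172.16.30.1/24"],
    "vNic_2": ["192.168.10.2/29"],
}
static_routes = {"0.0.0.0/0": "192.168.10.1"}  # next hop on the uplink subnet

fib = {}
for ifname, addrs in interfaces.items():
    for addr in addrs:
        net = ipaddress.ip_interface(addr).network
        fib[str(net)] = ("C", ifname)  # directly connected

for prefix, nh in static_routes.items():
    # Resolve the next hop against the connected prefixes to pick the egress interface.
    nh_ip = ipaddress.ip_address(nh)
    for ifname, addrs in interfaces.items():
        for addr in addrs:
            if nh_ip in ipaddress.ip_interface(addr).network:
                fib[prefix] = ("R", ifname)

print(fib["0.0.0.0/0"])       # ('R', 'vNic_2')
print(fib["172.16.10.0/24"])  # ('C', 'VDR')
```

This reproduces the five entries of the `show ip forwarding` output: the tenant subnets egress via the VDR pseudo-interface, while the default route recurses to the uplink vNic_2.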
For an Edge Services Gateway (ESG) instead of the DLR Control VM, it is possible to have all 10 interfaces connected, and the same view would list all 10 vNICs as connected. (One way to go beyond that limit is to use a trunk interface.) There is no LIF construct for the Edge: all of its NICs are in the data path, and there is no distributed routing for the Edge.
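By contrast, the distributed side can be pictured like this: each host's DLR kernel instance programs LIFs with identical IP and MAC addresses, so a VM's gateway is always answered locally on whatever host it runs. A minimal sketch under illustrative values (the host names, LIF IPs, and vMAC below are assumptions, not taken from this thread):

```python
# Sketch: every ESXi host's DLR kernel-module instance carries identical LIFs,
# so a VM's ARP for its gateway resolves to the same MAC on every host.
# Host names, LIF IPs, and the vMAC here are illustrative assumptions.
DLR_LIFS = {
    "172.16.10.1": "02:50:56:56:44:52",
    "172.16.20.1": "02:50:56:56:44:52",
    "172.16.30.1": "02:50:56:56:44:52",
}

class HostDlrInstance:
    """The DLR kernel-module instance as replicated on one ESXi host."""

    def __init__(self, host: str):
        self.host = host
        self.lifs = dict(DLR_LIFS)  # identical LIF table on every host

    def arp_reply(self, gateway_ip: str):
        # The local instance answers ARP for its own LIF IPs.
        return self.lifs.get(gateway_ip)

hosts = [HostDlrInstance(h) for h in ("esx-01", "esx-02", "esx-03")]
replies = {h.host: h.arp_reply("172.16.10.1") for h in hosts}
print(replies)  # the same MAC from every host
```

The point of the sketch: because the gateway is local to every host, first-hop routing never leaves the hypervisor, which is exactly what the ESG, with its centralized NICs, does not do.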
http://blog.bertello.org/2015/02/nsx-for-newbies-part-7-nsx-edge-gateway/
We can have up to 9 tenants connected to the same NSX Edge. Why 9? Because NSX Edge supports a maximum of 10 vNICs and 1 is required for uplink connectivity. In this scenario you cannot overlap subnets across the 9 tenants unless they are not advertised to the NSX Edge.
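The overlap rule in that quote can be checked up front with Python's stdlib `ipaddress` module before advertising tenant subnets to the Edge (the tenant names and subnets below are illustrative):

```python
import ipaddress

# Illustrative tenant subnets; an overlapping pair means only one of the two
# may be advertised to the shared NSX Edge.
tenants = {
    "tenant-1": ipaddress.ip_network("172.16.10.0/24"),
    "tenant-2": ipaddress.ip_network("172.16.20.0/24"),
    "tenant-3": ipaddress.ip_network("172.16.10.0/25"),  # overlaps tenant-1
}

names = sorted(tenants)
conflicts = [
    (a, b)
    for i, a in enumerate(names)
    for b in names[i + 1:]
    if tenants[a].overlaps(tenants[b])
]
print(conflicts)  # [('tenant-1', 'tenant-3')]
```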
Could this be the DLR Control VM, which has 10 vNIC interfaces? They do not pass traffic, so they are not assigned a port group. The LIF interfaces are connected to the VXLAN logical switches and act as the default gateway for the VMs on each logical switch.
The DLR's 10 NIC interfaces are shown on the DLR summary page:
NSX for Newbies – Part 6: Distributed Logical Router (dLR) | blog.bertello.org
Hi Canero,
We are facing the issue shown in screenshot 6.1, as described in the blog statement below.
"
In my environment I’m running NSX 6.1.1 and although I can see all the LIFs ip addresses assigned to the Control VM the Network adapters aren’t actually listed.
I’m not sure if this is an expected behaviour of 6.1 or a bug but I clearly remember that on 6.0 all the network adapters were listed.
"
In screenshot 6.1, Network adapter 2 shows as connected to "none". For us this is the same, but when you view the same interface from the NSX Manager view (DLR > Manage > Settings > Interfaces), the vNIC shows as connected to the CORRECT logical switch we assigned it to.
If I now go to Logical Switches, select the logical switch the DLR should be connected to, and select "Virtual Machines", it shows the DLR is NOT connected to that logical switch.
There is our problem: the DLR says it is connected to a logical switch, but it is NOT; instead it is connected to a "none" port group, as shown in screenshot 6 of the blog link you supplied.
Thanks,
Floki
Hi Canero,
Thanks for your response. The problem is this: when we assign a LIF to a logical switch, the view within NSX states it is connected to that logical switch, but when I go to the logical switch and check the connected VMs, the DLR cannot be found there.
When I go to the "none" port group, I can see the DLR connected there instead. Please NOTE: this is fine for the other interfaces, which are not connected to anything yet; they should be on the "none" port group, as that port group is a placeholder for non-connected interfaces (as explained by VMware GSS, this is a "ghost port group").
I need the DLR to have an interface on a specific logical switch (Transit-SW). When I add it to this logical switch it states it is connected, but it cannot contact/ping another VM on this same logical switch (Transit-SW). When I go to Transit-SW, I can only see the single VM I added, whereas I should see both the DLR and the VM. As a result, the VM cannot contact the DLR, which should be its gateway.
Thanks,
Floki