VMware Networking Community
cnrz
Expert

DLR Uplink - Internal LIF Interface What is the Difference?

Testing dynamic routing protocols on the DLR. The DLR Control VM handles dynamic routing on behalf of the vdr VIB on the ESXi hosts, acting as the control plane. Data traffic does not pass through the Control VM, and it is even possible to use the DLR without a Control VM if only static routing is used. Is it possible to clarify the differences between the Uplink and Internal type interfaces?

In most configuration examples, Uplink interfaces are used for the connection between the ESG and the DLR through a transit logical switch, and Internal LIFs are used for the connection to VMs in the Web, App, and DB tiers. From the DLR perspective, what makes these 2 interface types different? For example, is it possible to use dynamic routing protocols on Internal interfaces? (Since the DLR can have VLAN-based LIFs as well, this may be needed for some solutions.)

DLR LIF interfaces can be of type Internal or Uplink (External), as described in the Installation and Upgrade Guide:

https://pubs.vmware.com/NSX-6/index.jsp?topic=%2Fcom.vmware.nsx.install.doc%2FGUID-23FD0828-066A-49...

"You can configure up to 999 interfaces, with a maximum of 8 uplinks."  For almost all use cases single uplink interface is used, what other use cases that may require more than one uplink?

In the Troubleshooting NSX document it is mentioned that Uplink interfaces are configured on the DLR Control VM, but the Internal LIF interfaces are also observed in the show interface output as pseudo interfaces. So if no Control VM is used, as with static-only routing, how can the hosts have uplink interfaces?

The DLR Control VM’s interfaces can be displayed as follows:

edge-1-0> show interface

Interface VDR is up, line protocol is up

index 2 metric 1 mtu 1500 <UP,BROADCAST,RUNNING,NOARP>

HWaddr: be:3d:a1:52:90:f4

inet6 fe80::bc3d:a1ff:fe52:90f4/64

inet 172.16.10.1/24

inet 172.16.20.1/24

inet 172.16.30.1/24

proxy_arp: disabled

Auto-duplex (Full), Auto-speed (2460Mb/s)

input packets 0, bytes 0, dropped 0, multicast packets 0

input errors 0, length 0, overrun 0, CRC 0, frame 0, fifo 0, missed 0

output packets 0, bytes 0, dropped 0

output errors 0, aborted 0, carrier 0, fifo 0, heartbeat 0, window 0

collisions 0

Interface vNic_0 is up, line protocol is up

index 3 metric 1 mtu 1500 <UP,BROADCAST,RUNNING,MULTICAST>

HWaddr: 00:50:56:8e:1c:fb

inet6 fe80::250:56ff:fe8e:1cfb/64

inet 169.254.1.1/30

yantothen
Enthusiast

My 2 cents worth:

The hosts can have Uplink LIFs even without a DLR Control VM deployed.

The NSX Manager is the one that sends the created LIF information to the NSX Controller, which in turn distributes it to the ESXi hosts.
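
You can check this directly on any host in the transport zone, with or without a Control VM deployed. A sketch using the VDR kernel module CLI; the instance name "default+edge-1" is just an example and the exact net-vdr flag order can differ between NSX/ESXi builds:

# On an ESXi host: list the DLR instances, then the LIFs the Controller pushed to this host
net-vdr --instance -l
net-vdr --lif -l default+edge-1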

In fact, if we are just using static routing, we can use Uplink LIFs alone to connect the Web/App/DB tiers and the ESG (and without a DLR Control VM deployed).

But, as you know, we can only create up to 8 of them.


Then the Internal LIF comes to the rescue.

I believe there is only a subtle difference between an Internal and an Uplink LIF from the perspective of the ESXi hosts (or the DLR kernel module).


The difference matters more for the DLR Control VM.

If we need to use dynamic routing on the DLR, we will then have to deploy a DLR Control VM.

And... the Uplink type is for marking/flagging the LIF so that the NSX Manager knows the to-be-deployed DLR Control VM will need a real vNIC connected to the segment that the Uplink LIF attaches to.

That real vNIC gives the DLR Control VM an actual network connection to the segment, so it can establish routing adjacency/peering with other routing entities (e.g., the ESG) on that segment.


Whereas the Internal LIF, to the DLR Control VM, is not a real vNIC. From the DLR Control VM's perspective, I believe it is more about knowing its directly connected Internal LIF subnets so that it can advertise them via the configured dynamic routing protocol.
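
If it helps, this can be seen from the Control VM CLI with the standard Edge show commands (output not included here): the OSPF adjacency only exists towards the uplink vNIC, while the Internal LIF subnets just show up as connected routes behind the pseudo VDR interface and get redistributed from there.

edge-1-0> show ip ospf neighbor
edge-1-0> show ip route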


Thanks,

yantothen

blog.ipcraft.net

cnrz
Expert

Hello yantothen,

Thanks for your reply, which helps explain how the DLR Control VM's uplink interface binds to the transit LS that the Edge Internal and DLR Uplink LIFs connect to. What I could not figure out: if we have 4 Uplink LIFs on the DLR and a single Control VM, which of these Uplink LIFs is chosen? Do we also need 4 uplink interfaces on the Control VM, since each DLR has one Control VM? In most examples a single uplink interface to the transit LS is sufficient, but if some scenario needs more than 1 uplink, this may be important.

Also, if a VM is deployed and connected to an internal LS and an Internal LIF of the DLR, is it possible to run a dynamic routing protocol between this VM (a Quagga router) and the DLR? For normal VMs, stub Internal LIFs that are directly connected are enough, but this kind of scenario may come up. In that case, however, a DLR Control VM would have to be deployed and would have to talk BGP through an Internal interface, so I think this may not be possible.

Marking a LIF as Uplink or Internal may have exactly this kind of relevance. Beyond the explicit limit of 8 uplinks, what property of the LIF does the type actually change, as opposed to simply counting toward the roughly 1000-LIF total?

Regards,

yantothen
Enthusiast

A really interesting point you brought up here.

If we have one DLR (one Control VM) with, say, 2 uplink LIFs (each with one ESG connected), then the Control VM will also have 2 real vNICs, one connected to each uplink segment.

With static routing, this should work.

The routing on the DLR will then need a different static route for each of the uplinks, e.g., 10.10.10.0/24 forwarded out Uplink LIF1 and 10.10.20.0/24 forwarded out Uplink LIF2.
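
A rough sketch of that static configuration through the NSX REST API (the /routing/config/static path and the <route>/<vnic> elements are how I remember the NSX 6.x API guide, so please double-check; the next-hop addresses and vnic indexes are placeholders):

# Sketch only: two static routes, each pointing out a different uplink LIF
curl -k -u admin -H "Content-Type: application/xml" \
  -X PUT "https://nsxmgr/api/4.0/edges/edge-1/routing/config/static" -d '
<staticRouting>
  <staticRoutes>
    <route>
      <network>10.10.10.0/24</network>
      <nextHop>192.168.10.1</nextHop>
      <vnic>0</vnic>
    </route>
    <route>
      <network>10.10.20.0/24</network>
      <nextHop>192.168.20.1</nextHop>
      <vnic>1</vnic>
    </route>
  </staticRoutes>
</staticRouting>'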

The problem is with dynamic routing.

With an OSPF or BGP configuration, we can only use addresses (the forwarding and protocol addresses) from one of the uplink segments.

So there is no way to establish dynamic routing on multiple Uplink LIFs.

That's why all examples in the documentation show a single Uplink LIF, even for setups with multiple routing peerings, ECMP, etc.

Regards,

yantothen

blog.ipcraft.net

cnrz
Expert

Then I think this clarifies routing on the DLR: the Uplink LIF type is what carries both dynamic and static routing towards the northbound side. Even with ECMP across multiple Edges, a single uplink interface establishes neighborships with up to 8 ESGs; if it were otherwise, ECMP would mean talking OSPF on distinct uplink interfaces. Internal LIFs are used as the gateway for stub logical switches, through which VMs exit their logical switches.

For Static Routing:

When adding a static route, the interface needs to be specified, so even with no DLR Control VM it may be possible to use 2 uplinks: Edge-1 reached through Uplink-1 on the Transit-1 LS, and Edge-2 reached through Uplink-2 on the Transit-2 LS.

http://www.dasblinkenlichten.com/working-with-vmware-nsx-logical-to-physical-connectivity/

DLR_Static_Route.jpg
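
With no Control VM there is no Edge CLI to log in to, but the resulting routes can still be checked in the kernel module on a host (a sketch; the instance name is just an example):

# On an ESXi host: show the DLR instance's routing table as pushed by NSX Manager/Controller
net-vdr --route -l default+edge-1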

For Dynamic Routing:

The protocol address is what the ESG sees as its OSPF neighbor. So if we have 2 uplinks and only one of them can run OSPF, we choose which uplink by the IP address we specify as the protocol address. There is no place to enable OSPF explicitly on an uplink interface, and consequently it is also not possible to enable OSPF on more than a single uplink interface.

https://vcdx133.com/2014/10/11/nsx-dlr-and-esg-with-ospf-part-5-configure-ospf/

"n the “Interfaces” window, verify that a single “Uplink” and two “Internal” interfaces exist."

This may also explain why we need to enable dynamic routing if we want to SSH to the DLR: the DLR Control VM's own IP address is the protocol address, and SSH to the forwarding address is not possible because that IP resides in the DLR kernel modules on the ESXi hosts.

Does this rule apply to BGP as well? The documentation states it explicitly for OSPF and does not mention BGP.

(If BGP did allow more than 1 uplink, the GUI still only accepts a single protocol address; perhaps additional protocol addresses for BGP-enabled interfaces could be configured through the REST API.)

http://www.routetocloud.com/2014/06/nsx-distributed-logical-router/

"The DLR supports both OSPF and BGP on its Uplink Interface, but cannot run both at the same time. OSPF can be enabled only on single Uplink Interface."

yantothen
Enthusiast

Yes, it also applies to BGP.

We will get the error below when adding a second BGP peering with a forwarding address different from the first (existing) peering:

Second BGP Peer Config Error.png

Regards,

yantothen

blog.ipcraft.net
