
Dear readers

this is the second blog in a series related to NSX-T. It provides the information you need to better understand the implications of centralized services in NSX-T. While the first blog introduced the lab setup, this second blog discusses the impact of adding a Tier-1 Edge Firewall for tenant BLUE. The diagram below shows the logical representation of the lab setup with the Edge Firewall attached to the uplink interface of the Tier-1 Logical Router for tenant BLUE.

Blog-Diagram-2.1.png

 

For this blog I have chosen to add an Edge Firewall to a Tier-1 Logical Router, but I could also have chosen a Load Balancer, a VPN service or a NAT service. The implications for the "internal" NSX-T networking are similar. However, please keep in mind that with NSX-T 2.3 not all centralized services are supported at the Tier-1 level (for example VPN) or at the Tier-0 level (for example Load Balancer), and not all services (for example DHCP or Metadata Proxy) instantiate a Service Router.

 

Before I move forward and try to explain what happens under the hood when you enable an Edge Firewall, I would like to provide some additional information on the diagram below.

Blog-Diagram-2.2.png

I am sure you are already familiar with the diagram above, as we discussed it in my first blog. Each of the four Transport Nodes (TN) has the two tenants' Tier-1 Logical Routers instantiated. Inside each Transport Node, two Logical Switches with VNI 17295 and 17296 are used between the Tier-1 tenant DRs and the Tier-0 DR. These two automatically instantiated (sometimes referred to as auto-plumbing) transit overlay Logical Switches have the subnets 100.64.144.18/31 and 100.64.144.20/31 automatically assigned. Internal filtering avoids duplicate IP address problems, in the same way NSX-T already does for the gateway IP (.254) on the Logical Switches 17289 and 17294 where the VMs are attached. Each of these Tier-1 to Tier-0 transit Logical Switches (17295 and 17296) could be shown as linked together in the diagram, but as internal filtering takes place, this is irrelevant for the moment.
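The /31 transit subnets mentioned above can be inspected with Python's standard `ipaddress` module. A small sketch, using the addresses from the lab:

```python
import ipaddress

# The auto-plumbed transit links use /31 point-to-point subnets (RFC 3021),
# carved out of the RFC 6598 shared address space 100.64.0.0/10. A /31 has
# exactly two usable addresses, one per end of the link -- no network or
# broadcast address is reserved.
transit = ipaddress.ip_network("100.64.144.18/31")
endpoints = [str(host) for host in transit.hosts()]
print(endpoints)  # -> ['100.64.144.18', '100.64.144.19']
```

This is why a single /31 is enough for each Tier-1 to Tier-0 link: one address per router interface, with nothing wasted.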

The intra-Tier-0 Logical Switch with VNI 17292 is used to forward traffic between the Tier-0 DRs and northbound via the Service Routers (SR). This Logical Switch 17292 again has an automatically assigned IP subnet (169.254.0.0/28). Each Tier-0 DR is assigned the same IP address (.1), but the two Service Routers use different IPs (.2 and .3); otherwise the Tier-0 DR would not be able to forward based on equal cost with two different next hops.
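The equal-cost point can be sketched in a few lines of Python. This is illustrative only and not the actual NSX-T hashing algorithm; it simply shows why the two Service Routers need distinct next-hop IPs (.2 and .3) for the Tier-0 DR to spread flows across them:

```python
import zlib

# The two equal-cost next hops of the Tier-0 DR: the Service Router IPs on
# the intra-Tier-0 Logical Switch 17292.
NEXT_HOPS = ["169.254.0.2", "169.254.0.3"]

def pick_next_hop(src_ip: str, dst_ip: str) -> str:
    """Hash the flow identifiers onto one of the equal-cost next hops
    (illustrative hash, not the real NSX-T implementation)."""
    flow_key = f"{src_ip}->{dst_ip}".encode()
    return NEXT_HOPS[zlib.crc32(flow_key) % len(NEXT_HOPS)]

# The same flow always maps to the same Service Router next hop.
print(pick_next_hop("100.64.144.19", "8.8.8.8"))
```

If both Service Routers shared one IP, the DR would see a single next hop and could not balance traffic at all.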

 

Before the network administrator can configure an Edge Firewall for tenant BLUE at the Tier-1 level, he has to assign an edge-cluster to the Tier-1 Logical Router along with the edge-cluster members. This is shown in the diagram below.

Blog2-add-edge-nodes-to-Tier1-BLUE.png

Please be aware that as soon as you assign an edge-cluster to a Tier-1 Logical Router, a Service Router is automatically instantiated, independent of the Edge Firewall.

 

These two new Service Routers run on the edge-nodes in active/standby mode, as shown in the next diagram below.

Blog2-routing-tier1-blue-overview.png

 

The configuration of the tenant BLUE Edge Firewall itself is shown in the next diagram. For this lab we use the default firewall policy.

Blog2-enable-edge-firewall.png

This simple configuration step of adding the two edge-nodes to the Tier-1 Logical Router for tenant BLUE causes NSX-T to "re-organize" the internal auto-plumbing network. To understand what is happening under the hood, I have divided these internal network changes into four steps instead of showing only the final result.

 

In step 1, NSX-T internally disconnects the Tier-0 DR from the Tier-1 DR for the BLUE tenant, as the northbound traffic needs to be redirected to the two edge-nodes where the Tier-1 Service Routers are running. The internal Logical Switch with VNI 17295 is now explicitly linked together between the four Transport Nodes (TN).

Blog-Diagram-2.3.png

 

In step 2, NSX-T automatically instantiates on each edge-node a new Service Router at the Tier-1 level for the tenant BLUE with an Edge Firewall. The Service Routers are in active/standby mode. In this example, the Service Router running on the Transport Node EN1-TN is active, while the Service Router running on EN2-TN is standby. The Tier-1 Service Router uplink interface with the IP address 100.64.144.19 is accordingly either UP or DOWN.

Blog-Diagram-2.4.png

 

In step 3, NSX-T connects the Tier-1 Service Router and the Distributed Router for the BLUE tenant together. For this connection, a new Logical Switch with VNI 17288 is added. Again, the Service Router running on EN1-TN has the active interface with the IP address 169.254.0.2 up, while the interface of the Service Router on EN2-TN is down. This ensures that only the active Service Router can forward traffic.

Blog-Diagram-2.5.png

 

In the final step 4, NSX-T extends the Logical Switch with VNI 17288 to the two compute Transport Nodes ESX70A and ESX71A. This extension is required so that traffic from, for example, vm1 can be routed on the local host before it is forwarded to the Edge Transport Nodes. Finally, NSX-T adds the required static routes between the different Distributed and Service Routers. NSX-T performs all these steps under the hood automatically.
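The static routing from step 4 can be sketched as simple longest-prefix-match tables. The next-hop addresses below are the ones from the diagrams; the tenant BLUE segment 172.16.10.0/24 is a hypothetical example, as the blog does not name the real VM subnet:

```python
import ipaddress

# Hedged sketch of the static routes NSX-T plumbs in step 4.
TIER1_DR_ROUTES = {
    "0.0.0.0/0": "169.254.0.2",       # default via the active Tier-1 SR (VNI 17288)
}
TIER1_SR_ROUTES = {
    "0.0.0.0/0": "100.64.144.18",     # default toward the Tier-0 DR (VNI 17295)
    "172.16.10.0/24": "169.254.0.1",  # hypothetical BLUE segment back via the Tier-1 DR
}

def lookup(routes: dict, destination: str) -> str:
    """Longest-prefix-match lookup over a static routing table."""
    dest = ipaddress.ip_address(destination)
    matches = [p for p in routes if dest in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return routes[best]

print(lookup(TIER1_SR_ROUTES, "172.16.10.11"))  # -> 169.254.0.1
print(lookup(TIER1_DR_ROUTES, "8.8.8.8"))       # -> 169.254.0.2
```

The key point is the asymmetry: the Tier-1 DR only knows a default toward the active Service Router, while the Service Router needs both a northbound default and a route back to the tenant segment.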

Blog-Diagram-2.6.png

 

The next diagram below shows a traffic flow between vm1 and vm3. Traffic sourced from vm1 first hits the local DR in the BLUE tenant on ESX70A-TN. The traffic then needs to be forwarded to the active Tier-1 Service Router (SR) with the Edge Firewall running on Edge Transport Node EN1-TN. The traffic then reaches the Tier-0 DR on EN1-TN, is forwarded to the RED Tier-1 DR, and finally arrives at vm3. The return traffic first hits the local DR in the RED tenant on ESX71A-TN before reaching the Tier-0 DR on the same host. The next hop is the BLUE Tier-1 Service Router (SR). The Edge Firewall inspects the return traffic and forwards it locally to the BLUE Tier-1 DR before the traffic finally arrives back at vm1. The majority of the traffic is handled locally on EN1-TN. The bandwidth used between the physical hosts, and therefore the GENEVE-encapsulated traffic, is the same as without the Edge Firewall. But as everybody can imagine, an edge-node that might host multiple Edge Firewalls for multiple tenants or other centralized services should be sized accordingly.
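The forward and return paths just described can be written out hop by hop. A small sketch, with component placement taken from the diagram, makes the locality point visible: most routing hops land on the edge node EN1-TN.

```python
# Hop ordering for the vm1 -> vm3 flow, as described in the text.
forward_path = [
    "vm1 (ESX70A-TN)",
    "BLUE Tier-1 DR (local, ESX70A-TN)",
    "BLUE Tier-1 SR + Edge Firewall (EN1-TN)",
    "Tier-0 DR (EN1-TN)",
    "RED Tier-1 DR (EN1-TN)",
    "vm3 (ESX71A-TN)",
]
return_path = [
    "vm3 (ESX71A-TN)",
    "RED Tier-1 DR (local, ESX71A-TN)",
    "Tier-0 DR (ESX71A-TN)",
    "BLUE Tier-1 SR + Edge Firewall (EN1-TN)",
    "BLUE Tier-1 DR (EN1-TN)",
    "vm1 (ESX70A-TN)",
]

# Count how many hops of the round trip land on the edge node EN1-TN.
edge_hops = sum("EN1-TN" in hop for hop in forward_path + return_path)
print(edge_hops)  # -> 5
```

Five of the ten routing hops of the round trip sit on EN1-TN, which is why a single busy edge node deserves careful sizing.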

Blog-Diagram-2.7.png

 

I hope you had a little bit of fun reading these two blogs. Feel free to share this blog!

 

Lab Software Details:

NSX-T: 2.3.0.0

vSphere: 6.5.0 Update 1 Build 5969303

vCenter: 6.5 Update 1d Build 2143838

 

Version 1.0 - 10.12.2018