This is the first blog of a series about NSX-T. It provides a short introduction to the most relevant concepts required to understand the implications of centralized services in NSX-T. A centralized service could be, for example, a load balancer or an edge firewall.
NSX-T supports both distributed routing and a distributed firewall. Distributed routing means that each host prepared for NSX-T can route locally. In the logical view, this component is called the Distributed Router (DR). The DR is part of a Logical Router (LR), and an LR can be configured at Tier-0 or at Tier-1 level. Distributed routing is perfect for scale and can reduce the bandwidth utilization of each physical NIC on the host, as the routing decision is made on the local host. For example, when the source and destination VMs are located on the same host but connected to different IP subnets, and therefore attached to different overlay Logical Switches, the traffic never leaves the host. All traffic forwarding is processed on the host itself instead of in the physical network, for example on the ToR switch.
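This local routing decision can be sketched in a few lines. The code below is purely my own illustration, not NSX-T code: the VM names, host placement and subnets are made-up lab values. The point it demonstrates is that inter-subnet traffic crosses the physical network only when source and destination VMs live on different Transport Nodes.

```python
# Toy model (not NSX-T code): every Transport Node runs the same DR, so
# routing between subnets happens on the host where traffic is sourced.

vm_location = {                       # assumed inventory: VM -> (host, subnet)
    "vm1": ("ESX70A-TN", "172.16.10.0/24"),
    "vm2": ("ESX70A-TN", "172.16.20.0/24"),
    "vm3": ("ESX71A-TN", "172.16.20.0/24"),
}

def crosses_physical_network(src_vm, dst_vm):
    """Return True if the flow must be GENEVE-encapsulated onto the wire."""
    src_host, _ = vm_location[src_vm]
    dst_host, _ = vm_location[dst_vm]
    # The local DR routes between subnets on the source host itself,
    # so only a host change forces traffic onto the physical network.
    return src_host != dst_host

# vm1 -> vm2: different subnets, but same host -> routed locally
print(crosses_physical_network("vm1", "vm2"))   # False
# vm1 -> vm3: different host -> encapsulated and sent over the wire
print(crosses_physical_network("vm1", "vm3"))   # True
```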
Each host that is prepared for NSX-T and attached to an NSX-T Transport Zone is called a Transport Node (TN). Every Transport Node implicitly has an N-VDS configured, which for example provides the GENEVE Tunnel Endpoint and is responsible for distributed firewall processing. However, there are services like load balancing or edge firewalling which are not distributed services. VMware calls these services "centralized services". A centralized service instantiates a Service Router (SR), and this SR runs on an NSX-T edge-node (EN). An edge-node could be a VM or a bare-metal server. Each edge-node is also a Transport Node (TN).
Let's now have a look at a simple two-tier NSX-T topology with a tenant BLUE and a tenant RED. Neither tenant has, for now, any centralized service enabled at Tier-1 level. For the North-South connectivity to the physical world, a centralized service is already instantiated at Tier-0. We don't want to focus on this North-South routing part, but since we later want to understand what it means to have a centralized service configured on a Tier-1 Logical Router, it is important to understand it as well, because North-South routing is itself a centralized service. The diagram below shows the logical representation of a simple lab setup. This lab setup will later be used to instantiate a centralized service at a Tier-1 Logical Router.
For those who would like a better understanding of the topology, I have included a diagram of the physical view below. In this lab we actually use four ESXi hosts. For simplicity, this blog focuses on the ESXi hypervisor instead of KVM, even though a similar lab could be built with KVM too. On each of the two Transport Nodes ESX70A-TN and ESX71A-TN a VM is installed. The two other hosts, ESX50A and ESX51A, are NOT* prepared for NSX-T, but each of them hosts a single edge-node VM (EN1 and EN2). These two edge-nodes don't have to run on two different ESXi hosts, but it is recommended for redundancy reasons.
As shown in the next diagram, we now combine the physical and logical views. The two Transport Nodes ESX70A-TN and ESX71A-TN have only DRs instantiated at Tier-1 and Tier-0 level, but no Service Router. That means each Logical Router consists only of a DR. These Tier-1 DRs provide the gateway (.254) for the attached Logical Switches. Tenant BLUE uses VNI 17289 and tenant RED uses VNI 17294; NSX-T assigns these VNIs out of a VNI pool (default pool: 5000 - 65535). The edge-node VMs, now shown as Edge Transport Nodes (EN1-TN and EN2-TN), have the same Tier-1 and Tier-0 DRs instantiated, but only the Tier-0 also includes a Service Router (SR).
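To make the VNI pool idea concrete, here is a minimal allocator sketch. Only the default pool range (5000 - 65535) is taken from the product; the allocation logic itself is my own simplification, not NSX-T's internal implementation (which, for instance, would not hand out 17289 and 17294 strictly in sequence).

```python
# Illustrative sketch only: a pool that hands out overlay segment IDs
# (VNIs) from a configurable range, as NSX-T conceptually does for each
# new overlay Logical Switch.

class VNIPool:
    def __init__(self, start=5000, end=65535):   # NSX-T default pool range
        self._next = start
        self._end = end
        self.assigned = {}                       # logical switch -> VNI

    def allocate(self, switch_name):
        if self._next > self._end:
            raise RuntimeError("VNI pool exhausted")
        vni = self._next
        self._next += 1
        self.assigned[switch_name] = vni
        return vni

pool = VNIPool()
blue = pool.allocate("LS-BLUE")   # 5000, the first free VNI in this toy pool
red = pool.allocate("LS-RED")     # 5001
```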
The two Tier-1 Logical Routers, respectively their DRs, can only talk to each other via the green Tier-0 DR. But before you can attach the two Tier-1 DRs to a Tier-0 DR, a Tier-0 Logical Router is required, and a Tier-0 Logical Router mandates the assignment of an edge-cluster during its configuration. Let's assume at this point that we have already configured two edge-node VMs and assigned them to an edge-cluster. A Tier-0 Logical Router always consists of a Distributed Router (DR) and, depending on the node type, a Service Router as well. A Service Router is always required for a Tier-0 Logical Router, as the Service Router is responsible for the routing connectivity to the physical world; but the Service Router is only instantiated on the edge-nodes. In this lab, both Service Routers are configured on the two edge-nodes, respectively Edge Transport Nodes, in active/active mode to provide ECMP to the physical world.
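The active/active ECMP behaviour can be sketched as a flow hash over the available Service Routers. This is a toy illustration: the hash inputs and algorithm below are my own choice, not NSX-T's actual hashing, and the SR names and IP addresses are made up. What it shows is the key ECMP property: every packet of a given flow lands on the same SR, while different flows spread across both.

```python
import hashlib

# The two Tier-0 SRs running active/active on the Edge Transport Nodes.
EDGE_SRS = ["Tier-0 SR on EN1-TN", "Tier-0 SR on EN2-TN"]

def pick_sr(src_ip, dst_ip, src_port, dst_port, proto=6):
    """Hash the 5-tuple onto one of the equal-cost northbound paths."""
    key = f"{src_ip}/{dst_ip}/{src_port}/{dst_port}/{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return EDGE_SRS[digest % len(EDGE_SRS)]

flow = ("172.16.10.11", "203.0.113.10", 49152, 443)
# Deterministic per flow: every packet of this flow takes the same SR.
assert pick_sr(*flow) == pick_sr(*flow)
```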
All the internal transit links shown in the diagram below are configured automatically by NSX-T. The only task for the network administrator is to connect the Tier-0 DR to the Tier-1 DRs.
The northbound connection to the physical world further requires the configuration of an additional VLAN-based Transport Zone (or better, two Transport Zones for routing redundancy), plus the routing peering (typically eBGP). Below is the resulting logical network topology.
One might ask why NSX-T instantiates the two Tier-1 DRs on each edge-node too. Well, this is required for optimized forwarding. As already mentioned, routing decisions are always made on the host where the traffic is sourced. Assume vm1 in tenant BLUE would like to talk to a server in the physical world. Traffic sourced at vm1 is forwarded to its local gateway on the Tier-1 DR and then on to the Tier-0 DR on the same host. From the Tier-0 DR, the traffic is forwarded to the left Tier-0 SR on EN1-TN (let's assume the flow is hashed accordingly), and from there it reaches the external destination. The return traffic first reaches the Tier-0 SR on EN2-TN (let's assume again based on the hash); the traffic is then forwarded locally to the Tier-0 DR on the same Edge Transport Node and then to the Tier-1 DR of tenant BLUE. The traffic never leaves EN2-TN until it reaches, locally, the Logical Switch to which vm1 is attached. This is what is called optimized forwarding, and it is possible thanks to the distributed NSX-T architecture. The traffic needs to be forwarded over the physical data center infrastructure, and therefore encapsulated in GENEVE, only once per direction!
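The northbound walk above can be modelled as a list of hops, which makes the "one GENEVE encapsulation per direction" claim easy to verify. This is again my own toy model of the example flow (assuming the hash picked EN1-TN), not a trace from a real system.

```python
# Toy model of vm1's northbound path: each hop is (component, node).
# GENEVE encapsulation is needed only when the packet moves between
# Transport Nodes inside the overlay.

def northbound_path():
    return [
        ("Tier-1 DR, tenant BLUE", "ESX70A-TN"),
        ("Tier-0 DR",              "ESX70A-TN"),
        ("Tier-0 SR",              "EN1-TN"),   # assume the hash picked EN1-TN
        ("physical router",        "ToR"),
    ]

def geneve_encaps(path):
    # Count node-to-node transitions inside the NSX-T overlay; the final
    # hop out to the ToR is routed northbound, not GENEVE-encapsulated.
    overlay = [node for _, node in path if node != "ToR"]
    return sum(1 for a, b in zip(overlay, overlay[1:]) if a != b)

print(geneve_encaps(northbound_path()))   # 1
```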
This closes the first blog. In the second blog we will dive into the instantiation of a centralized service at Tier-1. I hope you had a little bit of fun reading this first write-up.
*Today, NSX-T also supports running edge-node VMs on NSX-T prepared hosts. This capability is important for combining compute and edge-node services on the same host.
Version 1.0 - 19.11.2018
Version 1.1 - 27.11.2018 (minor changes)
Version 1.2 - 04.12.2018 (cosmetic changes)
Version 1.3 - 10.12.2018 (link for second blog added)