This is my study note, compiled (cut and paste) from many sources such as the official documentation, blog series, and other material available on the internet. This document does not replace the official documentation and should not be treated as a safe and foolproof method for passing certification exams.
To preserve the intellectual property of bloggers and contributors, web references to the authors are given at the bottom of the document.
Any comments and contributions are appreciated.
NSX network virtualization programmatically creates, snapshots, deletes, and restores software-based virtual networks. This transforms the approach to networking, improving agility, reducing operational costs, and increasing security. NSX is a completely non-disruptive solution: the physical network infrastructure you already have is all you need to deploy a software-defined data center.
With network virtualization, the functional equivalent of a network hypervisor reproduces the complete set of Layer 2 through Layer 7 networking services (for example, switching, routing, access control, firewalling, QoS, and load balancing) in software. As a result, these services can be programmatically assembled in any arbitrary combination, to produce unique, isolated virtual networks in a matter of seconds.
Unlike legacy architectures, virtual networks can be provisioned, changed, stored, deleted, and restored programmatically without reconfiguring the underlying physical hardware or topology.
The data plane consists of the NSX vSwitch, which is based on the vSphere Distributed Switch (VDS) with additional components to enable services:
- distributed routing
- logical firewall
The control plane runs in the NSX Controller cluster, an advanced distributed state management system that provides control plane functions for NSX logical switching and routing.
It maintains information about:
- logical switches (VXLAN)
- distributed logical router (DLR)
It manages at hypervisor level:
- Distributed switching
It must be resilient and must be deployed as a three-node cluster. The three virtual appliances provide, maintain, and update the state of all network functions within the NSX domain (three nodes avoid a split-brain scenario).
A controller cluster has several roles, including:
- API provider
- Persistence server
- Switch manager
- Logical manager
- Directory server
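The three-node requirement can be illustrated with a simple majority-quorum check. This is a hypothetical sketch of the general principle, not NSX code:

```python
def has_quorum(alive_nodes: int, cluster_size: int = 3) -> bool:
    """A cluster keeps making control-plane decisions only while a
    strict majority of its nodes is reachable."""
    return alive_nodes > cluster_size // 2

# With 3 nodes, losing one still leaves a majority of 2. A 2-node
# cluster that splits 1/1 leaves neither side with a majority, so
# both halves could diverge (split-brain).
print(has_quorum(2, 3))  # True
print(has_quorum(1, 2))  # False
```

This is why an odd-sized cluster of at least three nodes is required: any network partition leaves at most one side with a majority.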
The management plane is built by the NSX Manager, the centralized network management component of NSX; it also provides the single point of configuration and the REST API entry points.
It is installed as a virtual appliance on any ESX host and has a one-to-one relationship with vCenter (even in a cross-vCenter NSX environment).
Typically end users tie network virtualization to their cloud management platform for deploying applications. NSX provides rich integration into virtually any CMP through REST APIs. Out-of-the-box integration is also available through:
- VMware vCloud Automation Center
- vCloud Director
- OpenStack with the Neutron plug-in for NSX.
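As a sketch of what CMP integration over REST looks like, the snippet below builds (but does not send) an authenticated request to NSX Manager. The host name and credentials are placeholders, and the endpoint shown should be checked against the NSX API guide for your version:

```python
import base64
import urllib.request

# Placeholder manager address and credentials -- substitute your own.
nsx_manager = "nsxmgr.example.local"
creds = base64.b64encode(b"admin:changeme").decode()

req = urllib.request.Request(
    url=f"https://{nsx_manager}/api/2.0/vdn/scopes",  # transport zones (assumed path)
    headers={
        "Authorization": f"Basic {creds}",  # NSX Manager uses HTTP basic auth
        "Accept": "application/xml",        # NSX-v APIs answer in XML
    },
    method="GET",
)
print(req.full_url)
```

A CMP drives NSX entirely through calls like this one, which is what makes out-of-the-box integrations such as the Neutron plug-in possible.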
An NSX Edge could be:
- NSX Edge as an edge services gateway (ESG): gives you access to all NSX Edge services, such as firewall, NAT, DHCP, VPN, load balancing, and high availability; you can install multiple ESG virtual appliances in a datacenter. It has a total of ten uplinks, and with a trunk it can have up to 200 subinterfaces.
- distributed logical router (DLR): provides East-West distributed routing with tenant IP address space and data path isolation. It can have eight uplink interfaces and up to a thousand internal interfaces. It has two main components:
- The DLR control plane (a virtual appliance, also called the control VM): supports dynamic routing protocols (BGP and OSPF) and exchanges routing updates with the next Layer 3 hop device.
- DLR kernel modules (VIBs) installed on the ESXi hosts that are part of the NSX domain, similar to the line cards in a modular chassis supporting Layer 3 routing. They hold a routing information base (RIB).
Logical Routing Mechanism
- NSX Manager UI (or API calls): creates a DLR instance and enables routing, leveraging either OSPF or BGP.
- The NSX Controller: uses the control plane with the ESXi hosts to push the new DLR configuration, including LIFs and their associated IP and vMAC addresses.
- Assuming a routing protocol is also enabled on the next-hop device (an NSX Edge [ESG] in this example), OSPF or BGP peering is established between the ESG and the DLR control VM. The ESG and the DLR can then exchange routing information:
- The DLR control VM can be configured to redistribute into OSPF the IP prefixes for all the connected logical networks (172.16.10.0/24 and 172.16.20.0/24 in this example). As a consequence, it then pushes those route advertisements to the NSX Edge. Notice that the next hop for those prefixes is not the IP address assigned to the control VM (192.168.10.3) but the IP address identifying the data-plane component of the DLR (192.168.10.2). The former is called the DLR "protocol address," whereas the latter is the "forwarding address."
- The NSX Edge pushes to the control VM the prefixes to reach IP networks in the external network. In most scenarios, a single default route is likely to be sent by the NSX Edge, because it represents the single point of exit toward the physical network infrastructure.
- The DLR control VM pushes the IP routes learned from the NSX Edge to the controller cluster.
- The controller cluster is responsible for distributing routes learned from the DLR control VM to the hypervisors. Each controller node in the cluster takes responsibility for distributing the information for a particular logical router instance. In a deployment with multiple logical router instances, the load is distributed across the controller nodes. A separate logical router instance is usually associated with each deployed tenant.
- The DLR routing kernel modules on the hosts handle the data-path traffic for communication to the external network by way of the NSX Edge.
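The route-distribution steps above can be modeled as a toy data flow. All class and variable names here are illustrative, not NSX internals:

```python
# Routes advertised by the ESG to the DLR control VM (typically just
# a default route toward the physical network, as noted above).
esg_routes = {"0.0.0.0/0": "192.168.10.1"}

class ControlVM:
    """DLR control plane: peers with the ESG and learns routes."""
    def __init__(self):
        self.rib = {}
    def learn_from_esg(self, routes):
        self.rib.update(routes)

class ControllerCluster:
    """Pushes routes learned by the control VM down to every host."""
    def push_to_hosts(self, rib, hosts):
        for host in hosts:
            host.kernel_rib.update(rib)

class Host:
    """ESXi host; its DLR kernel module forwards in the data path."""
    def __init__(self, name):
        self.name = name
        self.kernel_rib = {}

control_vm = ControlVM()
control_vm.learn_from_esg(esg_routes)

hosts = [Host("esxi-01"), Host("esxi-02")]
ControllerCluster().push_to_hosts(control_vm.rib, hosts)

for h in hosts:
    print(h.name, h.kernel_rib)  # every host now holds the default route
```

The key point the model captures is that the control VM never touches data-path traffic: it only learns routes and hands them to the controller cluster, which programs the kernel modules on every host.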
A logical switch is a distributed port group on a distributed switch; it gets a unique VNI (VXLAN Network Identifier) that overlays the L2 network. A logical switch is distributed and can span all hosts in vCenter (or all hosts in a cross-vCenter NSX environment). This allows for virtual machine mobility (vMotion) within the data center without the limitations of the physical Layer 2 (VLAN) boundary. The physical infrastructure is not constrained by MAC/FIB table limits, because the logical switch contains the broadcast domain in software.
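The VNI is a 24-bit field in the VXLAN header (RFC 7348), which is why VXLAN scales to roughly 16 million segments versus 4096 VLANs. A minimal sketch of building that 8-byte header:

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags byte with the
    I bit set, reserved bits, then the 24-bit VNI, then a reserved byte."""
    assert 0 <= vni < 2 ** 24          # VNI is a 24-bit field
    flags_reserved = 0x08 << 24        # I flag = 1, remaining bits reserved
    return struct.pack("!II", flags_reserved, vni << 8)

hdr = vxlan_header(5001)               # e.g. a logical switch with VNI 5001
print(hdr.hex())                       # 0800000000138900
```

In a real frame this header sits between the outer UDP header and the encapsulated inner Ethernet frame; the VTEPs on each host add and strip it in the data path.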
Provides north-south and east-west routing without the use of a physical appliance, allowing VM-to-VM communication as well as VM communication to and from the public (or physical) world.
Provides security mechanisms for dynamic virtual data centers: it allows you to segment virtual datacenter entities, such as virtual machines, based on VM names and attributes, user identity, and vCenter objects such as datacenters and hosts, as well as traditional networking attributes such as IP addresses, VLANs, and so on.
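The practical difference from an IP-based firewall is that group membership is computed from object attributes, so rules follow workloads as they move. A hypothetical sketch (names and matching criterion are invented for illustration):

```python
# Inventory of VMs with some attributes a security group could match on.
vms = [
    {"name": "web-01", "os": "linux", "ip": "172.16.10.11"},
    {"name": "db-01",  "os": "linux", "ip": "172.16.20.11"},
]

def members(vms, name_prefix):
    """Dynamic membership: every VM whose name matches the criterion.
    Recomputed as VMs appear, move, or are renamed."""
    return [vm for vm in vms if vm["name"].startswith(name_prefix)]

web_tier = members(vms, "web-")
print([vm["ip"] for vm in web_tier])  # the rule follows the VMs, not fixed IPs
```

A firewall rule scoped to `web_tier` stays correct even if a web VM gets a new IP address after vMotion or redeployment, which is the point of attribute-based segmentation.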
Logical Virtual Private Networks (VPNs)
SSL VPN-Plus allows remote users to access private corporate applications. IPsec VPN offers site-to-site connectivity between an NSX Edge instance and remote sites with NSX or with hardware routers/VPN gateways from third-party vendors. L2 VPN allows you to extend your datacenter by letting virtual machines retain network connectivity and the same IP address across geographical boundaries.
Logical Load Balancer
The NSX Edge load balancer distributes client connections directed at a single virtual IP address (VIP) across multiple destinations configured as members of a load-balancing pool.
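As a sketch, round-robin selection over pool members behind a single VIP can be modeled as below. NSX Edge supports several balancing algorithms; round-robin is shown only as the simplest illustration, and the addresses are invented:

```python
import itertools

# Pool members behind one VIP; each new connection to the VIP is
# handed to the next member in turn.
pool = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]
members_iter = itertools.cycle(pool)

vip_connections = [next(members_iter) for _ in range(5)]
print(vip_connections)
```

Five successive connections land on members 1, 2, 3, then wrap back to 1 and 2, spreading load evenly across the pool.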
Helps to provision and assign network and security services to applications in a virtual infrastructure: it maps these services to a security group, and the services are applied to the virtual machines in the security group using a Security Policy.
Integration with 3rd-party solutions