NSX Replication Modes and the NSX Controller

After looking at the various replication modes - unicast, multicast, and hybrid - I wanted to confirm how this affects the use of the NSX Controller.

Do the features and tables of the NSX Controller - ARP suppression etc. - still come into play? The NSX design guide doesn't suggest they do, i.e.:

"Multicast mode is the process for handling BUM traffic specified by the VXLAN IETF draft and does not leverage any of the enhancements brought by NSX with the introduction of the controller clusters. This behavior does not leverage the decoupling of logical and physical networking as communication in the logical space is predicated on the multicast configuration required in the physical network infrastructure" - https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmw-nsx-network-virtu...

Can anyone point me in the right direction - Thanks.

2 Replies
VMware Employee

A few key features of the NSX Controllers are that they remove the dependency on multicast in the physical network and suppress broadcasts in the VXLAN network. For ARP packets, the hypervisor cache and the controller ARP table are checked before a broadcast packet goes out and gets replicated to multiple hosts. This feature has been available from the start, but there was a limitation for the DLR: in earlier versions of NSX, ARP suppression was limited to VMs. In recent versions, the DLR also leverages the same feature.

ARP suppression has been extended to include the Distributed logical router (DLR) as well.

  • ARP requests from the distributed logical router are treated the same way as ARP requests from other VMs and are subject to suppression. When the distributed logical router has to resolve the ARP for a destination IP, the request is suppressed by the logical switch, preventing flooding when the IP-to-MAC binding is already known to the controller.
  • When a LIF is created, the distributed logical router adds an ARP entry for the LIF IP in the logical switch, so ARP requests for the LIF IP are also suppressed by the logical switch.
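The lookup order described above - local hypervisor cache first, then the controller's ARP table, and only then a flooded request - can be sketched roughly as follows. This is a conceptual illustration, not NSX source code; the function and table names are hypothetical.

```python
# Hypothetical sketch of the ARP suppression decision described above.
# Tables are plain dicts of IP -> MAC; names are illustrative only.

def resolve_arp(target_ip, host_arp_cache, controller_arp_table):
    """Return (mac, how_resolved) for an ARP request on a logical switch."""
    # 1. Check the local hypervisor ARP cache first.
    if target_ip in host_arp_cache:
        return host_arp_cache[target_ip], "local-cache"
    # 2. Query the controller's ARP table (IP-to-MAC bindings it has learned).
    if target_ip in controller_arp_table:
        mac = controller_arp_table[target_ip]
        host_arp_cache[target_ip] = mac          # cache for next time
        return mac, "controller-suppressed"
    # 3. Only if both miss does the request get flooded as BUM traffic.
    return None, "flooded"

# A DLR LIF IP is pre-populated when the LIF is created, so ARP requests
# for it are suppressed the same way as for any VM.
controller = {"10.0.0.1": "02:50:56:00:00:01"}   # LIF IP added at creation
mac, how = resolve_arp("10.0.0.1", {}, controller)
# how == "controller-suppressed": the request never leaves the host
```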


The short answer is that if the feature is enabled, we can leverage it irrespective of the replication mode.

Sree | CKA|CKAD|VCIX-3X| VCAP-4X| VExpert 5x
Please KUDO helpful posts and mark the thread as solved if answered

The choice of the replication mode for BUM (Broadcast, Unknown Unicast, and Multicast) traffic may depend on the:

  • Size of the NSX environment (number of ESX hosts, applications depending on multicast and broadcast).
  • Physical network topology, i.e. is the underlying physical network a CLOS (L2/L3) fabric or a classical 3-tier L2 design with Spanning Tree? Do the routers support PIM, and do the switches support IGMP snooping/querier?
  • Whether there is an SP (Service Provider) cloud between the ESX hosts; multicast through the MPLS cloud or WAN must then be supported, and some SPs support it, some don't, or may require additional services.

Multicast mode does not need an NSX Controller cluster for sending BUM traffic to other ESX hosts. When a VM sends traffic, similar to a classical L2 switch, the VM's local ESX host looks at the destination MAC address of the frame and tries to find a match in its local VTEP-MAC address table. If the destination MAC is matched (known unicast), it encapsulates the packet with a VXLAN header and sends it to the remote ESX host's VTEP address. If the traffic is BUM, no match is possible in this table, so the ESX host creates a multicast packet with the multicast group address assigned to this VXLAN and sends it to the physical network. The other ESX hosts, which have also registered for the same multicast group, receive this frame, forward it to their local VMs, and populate their local VTEP-MAC tables. So this mode inherently reduces broadcast traffic on the physical network (a single multicast packet instead of one copy per host) as well as the CPU load of the ESX host, and the MAC learning it provides dampens subsequent unknown-unicast flooding.
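The forwarding decision above - known unicast goes straight to the remote VTEP, everything else to the VNI's multicast group - can be sketched like this. It is a simplification for illustration, assuming a dict-based VTEP-MAC table; it is not how the ESX data path is actually implemented.

```python
# Conceptual sketch of the multicast-mode forwarding decision.
# Table layout and names are assumptions for illustration only.

BUM_MACS = {"ff:ff:ff:ff:ff:ff"}   # broadcast (multicast MACs would join this set)

def forward(dst_mac, vtep_mac_table, vni_mcast_group):
    """Decide the outer destination for a frame leaving the local host."""
    vtep = vtep_mac_table.get(dst_mac)
    if dst_mac not in BUM_MACS and vtep is not None:
        # Known unicast: one VXLAN-encapsulated copy to the remote VTEP.
        return ("unicast", vtep)
    # BUM or unknown-unicast traffic: one copy to the VNI's multicast group;
    # the physical network replicates it to all subscribed hosts.
    return ("multicast", vni_mcast_group)

table = {"00:50:56:aa:bb:cc": "192.168.50.11"}
forward("00:50:56:aa:bb:cc", table, "239.1.1.100")  # -> ("unicast", "192.168.50.11")
forward("ff:ff:ff:ff:ff:ff", table, "239.1.1.100")  # -> ("multicast", "239.1.1.100")
```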

The disadvantage of multicast mode is the dependency of the NSX control plane on the physical infrastructure. Because it requires PIM and IGMP snooping/querier configuration and maintenance, the NSX design becomes more complex, involving the design of the multicast protocols and the sizing and scalability of the multicast tables on the physical infrastructure. Troubleshooting an L2 reachability problem also comes down to troubleshooting PIM and IGMP, which may themselves depend on OSPF or BGP; this is not easy or desirable for many environments. Support tickets may bounce between different groups inside the company, different vendors, and SPs, which prolongs troubleshooting and makes it complex and cumbersome. So multicast mode may be recommended only when the other two modes do not meet the scalability and sizing requirements, and it could be more suitable for cloud-scale, SP, or very large deployments.

Hybrid mode does not require L3 multicast from the physical network, but it does require NSX Controllers. From the physical network, only L2 multicast capabilities (IGMP snooping/querier), which are easier to configure or enabled by default, are needed. In this mode, an ESX host with BUM traffic sends it to its local segment as L2 multicast (similar to multicast mode) and to remote segments as unicast packets addressed to one MTEP (multicast proxy) per segment, which re-injects the traffic as L2 multicast there. This reduces the BUM traffic on the physical network: ARP suppression applies, and a single multicast packet replaces per-host broadcast copies. The role of the Controllers in hybrid mode is to sync the MTEP tables across the ESX hosts in the cluster. This way, VTEPs that are not on the same IP subnet can still receive packets sent from VTEPs in other subnets, which is important for L3 CLOS fabrics where the VTEP IP addresses may be separated across different racks.
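The hybrid fan-out described above - one local L2 multicast copy plus one unicast copy per remote VTEP subnet - can be sketched as below. The data shapes and the "first host in the subnet becomes the MTEP" rule are simplifying assumptions for illustration; in real NSX, the MTEP choice comes from the controller-synced tables.

```python
# Rough sketch of hybrid-mode BUM replication; not NSX code.

def hybrid_replicate(src_vtep_subnet, hosts):
    """hosts: list of (vtep_ip, vtep_subnet). Return the copies the source sends.

    Local segment:   one L2 multicast copy (IGMP snooping fans it out).
    Remote segments: one unicast copy to a chosen MTEP per subnet, which
    re-injects the traffic as L2 multicast in its own segment.
    """
    copies = [("l2-multicast", src_vtep_subnet)]
    remote_mteps = {}
    for vtep_ip, subnet in hosts:
        if subnet != src_vtep_subnet and subnet not in remote_mteps:
            remote_mteps[subnet] = vtep_ip   # assumption: first host acts as MTEP
    copies += [("unicast-to-mtep", mtep) for mtep in remote_mteps.values()]
    return copies

hosts = [("10.1.1.12", "10.1.1.0/24"),
         ("10.1.2.11", "10.1.2.0/24"), ("10.1.2.12", "10.1.2.0/24"),
         ("10.1.3.11", "10.1.3.0/24")]
# A source in 10.1.1.0/24 sends 1 local L2 multicast + 2 unicast copies
# (one MTEP per remote subnet), regardless of how many hosts each rack holds.
copies = hybrid_replicate("10.1.1.0/24", hosts)
```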

Unicast mode does not require any configuration on the physical network side, providing complete decoupling of the NSX VXLAN control plane from the underlying physical network. For ARP resolution, only the Controllers are queried instead of sending multicast packets. Since the Controllers maintain VTEP, MAC, and ARP tables for every ESX host and VM, they function as an ARP proxy: they reply to the local ESX host with the MAC address of the remote VM, and ARP suppression is achieved. If no Controllers are available (it is a cluster of 3 nodes distributed across 3 different ESX hosts, so this is very unlikely), then ARP resolution this way is not possible. In this mode, broadcast and multicast traffic is sent only to the ESX hosts that have VMs on that VXLAN. For example, if there are 10 ESX hosts and only 4 of them have VMs on VNI 5000, the number of VXLAN-encapsulated copies is 4, not 10.
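The scoping in that last example - with 10 hosts but only 4 of them hosting VMs on VNI 5000, replication is limited to those 4 - can be sketched as follows. The dict-of-sets layout is an assumption for illustration; in NSX it is the controller's VTEP table that tells each host which peers need a copy.

```python
# Sketch of unicast-mode BUM scoping: replicate only to VNI member hosts.

def vni_members(vni, host_vnis):
    """host_vnis: {host: set of VNIs with local VMs}. Return the hosts
    that actually participate in this VNI (the replication scope)."""
    return sorted(h for h, vnis in host_vnis.items() if vni in vnis)

# 10 hosts, but only esx1..esx4 have VMs on VNI 5000.
hosts = {f"esx{i}": ({5000} if i <= 4 else {5001}) for i in range(1, 11)}
members = vni_members(5000, hosts)   # ['esx1', 'esx2', 'esx3', 'esx4']
```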

From a simplicity and ease-of-troubleshooting point of view, the recommendation could be Unicast > Hybrid > Multicast; so if unicast mode meets the size of the ESX cluster and VNI count, it may be preferred. Unicast mode should be sufficient in most deployments.

From a scalability perspective, it is Multicast > Hybrid > Unicast; so if the environment is so large that unicast mode does not meet the sizing requirement, hybrid mode could be preferred as an optimal choice in between. Multicast mode is generally recommended only for very large environments where even hybrid mode is not sufficient, or for migration from old VXLAN deployments.

  1. Unicast:
       • Totally decouples physical from virtual.
       • Used with small to medium sized implementations.
       • No complex hardware configuration involved.
       • Requires an NSX controller cluster.
       • Scales via L3, which also enhances performance (each rack/cluster has its own VTEP subnet).
       • Having VXLAN offloading on the NIC aligns well here.
  2. Multicast:
       • No decoupling of physical from virtual.
       • No scale set for this mode, to be honest.
       • Requires extra, complex configuration at the physical layer.
       • Requires extra configuration on NSX (an IP multicast range).
  3. Hybrid:
       • Combines both virtual and physical at the same time.
       • Used with large scale implementations.
       • Requires an NSX controller cluster.
       • Requires extra configuration at the physical layer.
       • Scales well in a spine/leaf topology.


The control plane decouples NSX for vSphere from the physical network and handles the broadcast, unknown unicast, and multicast (BUM) traffic within the logical switches. The control plane mode is configured on the transport zone and is inherited by all logical switches that are created within it; it is possible to override it per logical switch.

The following options are available.

Multicast Mode

The control plane uses multicast IP addresses on the physical network. Use multicast mode only when upgrading from existing VXLAN deployments. In this mode, you must configure PIM/IGMP on the physical network.

Unicast Mode

The control plane is handled by the NSX Controllers and all replication occurs locally on the host. This mode does not require multicast IP addresses or physical network configuration.

Hybrid Mode

This mode is an optimized version of the unicast mode where local traffic replication for the subnet is offloaded to the physical network. Hybrid mode requires IGMP snooping on the first-hop switch and access to an IGMP querier in each VTEP subnet. Hybrid mode does not require PIM.
