My question is about VXLAN in a VMware vMSC and Cisco Catalyst solution across 2 datacenters with an ISL between them.
If you do not have NSX, what configuration is needed on the ESXi hosts? And on the distributed switch?
Is it just that IGMP snooping is enabled on the distributed switch, and all other VXLAN configuration is 'hardware VXLAN' on the physical switches?
ESXi doesn't need to know you're using VXLAN. As long as the physical network presents the VLANs at the uplinks (trunked), you can create a VDS with port groups that use those specific VLANs. You set the VLAN on the port group itself.
I hope that helps answer your question?
Not sure what you are planning to do, because VXLAN is an overlay: L2 over L3.
Now, VXLAN uses VNIs (VXLAN network identifiers), which are not VLANs. So if you want to use an overlay inside your DC, you need NSX. Also, just as a reminder, NSX uses Geneve instead of VXLAN.
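To put numbers on the VNI-vs-VLAN distinction above, a quick sketch (just arithmetic, nothing vendor-specific):

```python
# A VLAN ID is 12 bits, a VXLAN VNI is 24 bits, so the overlay
# offers vastly more segments than classic 802.1Q VLANs.
vlan_ids = 2 ** 12   # 4096 possible VLAN IDs
vni_ids = 2 ** 24    # 16,777,216 possible VNIs
print(vlan_ids, vni_ids)   # 4096 16777216
```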
If you want to pass VXLAN over the WAN, I would suggest installing an autonomous NSX Edge:
Using this possible workaround, you can pass VLAN/VXLAN traffic over an L2 VPN.
Hope this helps.
Hi, thanks for the replies.
What is in place is VXLAN EVPN between the 2 DCs.
So, to confirm, there are two types of VXLAN implementation: hardware VXLAN on the physical switches only, or VXLAN extended down into the hosts.
With the first option above, if there were a suspected VXLAN issue, it would be on the switches (multicast on the switches, etc.)? Since there is nothing VXLAN-related configured on ESXi apart from the vDS and port groups with VLANs?
If you went for the second option above, with VXLAN within the DC, is there a way to do that with VXLAN only, or would you need NSX for this?
"With the first option above, if there were a suspected VXLAN issue, it would be on the switches (multicast on the switches, etc.)? Since there is nothing VXLAN-related configured on ESXi apart from the vDS and port groups with VLANs?"
Correct, a VXLAN issue would most likely be on the switches, since all VXLAN is doing is packaging L2 into L3 to get it across the underlay network. There is no special config you need on the vDS.
"If you went for the second option above, with VXLAN within the DC, is there a way to do that with VXLAN only, or would you need NSX for this?"
I'm not a Cisco guy, but I believe you can use Cisco VXLAN between the switches to more easily present the VLANs to the hosts. So... yes?
The advantage of using NSX (which uses the Geneve protocol these days) is that it can seamlessly take the overlay into the hosts and provide a cross-site overlay via NSX Federation. What I mean by "into the hosts" is, for example: you have two networks on the same host with separate VMs trying to talk to each other. Instead of going out of the host to the network's gateway, NSX places a distributed router on the host that can route the traffic and keep it inside the host. This reduces some traffic on your switches and can help with very chatty applications.
Just to understand your scenario, here is the topology (assuming):
VM -> ESXi -> TOR switch (encapsulation occurs here) -> internet -> TOR switch (de-encapsulation occurs here) -> ESXi -> VM
From the VM to the TOR switch the traffic is plain VLAN traffic; the TOR switch encapsulates it in VXLAN, the destination TOR switch de-encapsulates it, and then it is VLAN traffic again. The TOR switches are the ones responsible for the encapsulation process.
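As a rough illustration of what the TOR does at the encapsulation step, here is a sketch of the 8-byte VXLAN header defined in RFC 7348 (the outer Ethernet/IP/UDP headers are omitted, and the VNI value 5001 is just an example, not from the thread):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348:
    one flags byte (0x08 = 'VNI present'), 3 reserved bytes,
    a 3-byte VNI, and 1 trailing reserved byte."""
    assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
    flags_and_reserved = struct.pack("!I", 0x08000000)
    vni_and_reserved = struct.pack("!I", vni << 8)
    return flags_and_reserved + vni_and_reserved

print(vxlan_header(5001).hex())  # 0800000000138900
```

The TOR prepends this header (plus outer UDP/IP/Ethernet) to the original L2 frame on the way out, and strips it all off on the way in.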
NSX will provide the overlay inside the DC, or, as mentioned, you can use the standalone Edge and run an L2 VPN. Now, my question is: you are not using an overlay inside your DC, so why are you planning to overlay over the WAN?
Hi, yes, the solution in place is as you describe: VM -> ESXi -> TOR switch (encapsulation occurs here) -> internet -> TOR switch (de-encapsulation occurs here) -> ESXi -> VM
So the above is the typical VXLAN deployment, and straightforward.
If you wanted to extend the above so that VXLAN was also used within the DC and down to the hosts, how would that be achieved without NSX?
As far as I know, the way to use your VXLAN will be over an L2 VPN, and here are my reasons:
- If you want to use VXLAN inside your DC, you would need NSX-V, which is no longer supported.
- NSX-T (as of version 4, re-branded as simply NSX) uses Geneve.
- For the internal communication there is the concept of transport zones; you will have overlay and VLAN ones. The overlay transport zone carries the internal overlay VNIs, and the VLAN transport zone is for the uplinks.
- There is also the concept of Federation: with Federation you can have multiple sites using global objects.
But to answer your question: in order to have an overlay down to the hosts, you need NSX to achieve it.
Hope this answers your question.
That is my understanding. You can leverage products like ACI to integrate with vCenter so that the VLAN that eventually maps to a VNI is deployed automatically. Otherwise, you'll have to do the deployment and mapping manually.
In a VMware vMSC (vSphere Metro Storage Cluster) setup spanning two datacenters with Cisco Catalyst switches and ISL (Inter-Switch Link) connectivity, you can indeed run VXLAN without VMware NSX by relying on the capabilities of the physical switches. Below are the key steps you would typically need:
1. **ESXi Host Configuration**:
- Without NSX, the ESXi hosts themselves need no VXLAN-specific configuration; they only need the relevant VLANs trunked to their uplinks and a supported vSphere version.
- Create and configure a VMware Distributed Switch (VDS) that spans both datacenters, with port groups for the VLANs that the physical switches will map into VXLAN.
2. **Distributed Switch Configuration**:
- Create a Distributed Port Group on the VDS specifically for VXLAN traffic.
- Enable IGMP Snooping on the Distributed Switch if required for multicast group management (which is common in VXLAN deployments).
3. **Physical Switch Configuration**:
- On the Cisco Catalyst switches, enable VXLAN support, which typically involves configuring the following:
- VLANs: Ensure that the VLANs used for VXLAN are properly configured on the switches.
- Multicast Support: Configure multicast group addresses for VXLAN traffic and ensure proper multicast routing if needed.
- MTU Size: Ensure that the MTU size is consistent across your entire network, including both ESXi hosts and physical switches. VXLAN adds encapsulation, so you might need a larger MTU size.
- ISL Configuration: As you mentioned, you have ISL between datacenters. Ensure that ISL is correctly configured to allow VXLAN traffic to traverse between the datacenters.
4. **VTEP Configuration (VXLAN Tunnel Endpoints)**:
- Configure VXLAN Tunnel Endpoint (VTEP) interfaces on the physical switches. These interfaces are responsible for encapsulating and decapsulating VXLAN traffic as it enters and exits the overlay.
5. **VXLAN Segment Configuration**:
- Define VXLAN segments or VNIs (VXLAN Network Identifiers) as needed to segment your network. Each segment represents a separate logical network on top of the physical infrastructure.
6. **Routing Configuration (if required)**:
- If you need to route traffic between VXLAN segments or VNIs across datacenters, you'll need to configure routing on the physical switches.
7. **Testing and Validation**:
- Thoroughly test your VXLAN setup to ensure that traffic flows as expected between virtual machines in different VXLAN segments and across datacenters.
8. **Monitoring and Troubleshooting**:
- Implement monitoring and logging to assist in troubleshooting and maintaining the VXLAN network.
Remember that VXLAN configurations can be complex, and they should align with your specific network design and requirements. It's also crucial to consult the documentation for your Cisco Catalyst switches and follow best practices provided by VMware for VXLAN deployments in a vMSC environment.