VMware Networking Community
srodenburg
Expert

DHCP relay with software L2 bridge not working from Physical to Virtual

Hello,

We have an 8-node NSX cluster (v6.3) and use an NSX software L2 bridge to connect to some physical devices. Normal connectivity between the physical and virtual worlds works fine.

We have one HA-enabled Logical Router Edge deployed.

The DLR firewall has been completely disabled (for other reasons).

We have no Edge firewalls in service.

We have a physical domain controller and a domain controller VM (both are 2012 R2).

Both are also DHCP servers, and both have been entered as DHCP relay servers in NSX (in Edges -> our Edge Logical Router -> Manage -> DHCP Relay -> "DHCP Relay Global Configuration").

On the same page, under "DHCP Relay Agents", I added the vNIC for the correct network.
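For completeness, the same relay configuration could presumably also be pushed via the NSX REST API instead of the UI. A minimal sketch (the edge ID, manager address, server IPs and vnic index are placeholders, and the `/api/4.0/edges/{edgeId}/dhcp/config/relay` endpoint and XML shape are from my reading of the NSX 6.x API guide, so verify against it):

```python
# Build the DHCP relay payload for an NSX-v logical router via the REST API.
# Placeholders (not from our environment): edge-1, nsxmgr.example.com,
# the server IPs, and vnic index 3.
import xml.etree.ElementTree as ET

def build_relay_config(server_ips, vnic_indexes):
    """Return the XML body for a DHCP relay PUT request."""
    relay = ET.Element("relay")
    servers = ET.SubElement(relay, "relayServer")
    for ip in server_ips:
        ET.SubElement(servers, "ipAddress").text = ip
    agents = ET.SubElement(relay, "relayAgents")
    for idx in vnic_indexes:
        agent = ET.SubElement(agents, "relayAgent")
        ET.SubElement(agent, "vnicIndex").text = str(idx)
    return ET.tostring(relay, encoding="unicode")

body = build_relay_config(["10.0.0.10", "10.0.0.11"], [3])
print(body)
# To apply it (untested sketch, credentials are placeholders):
#   import requests
#   requests.put("https://nsxmgr.example.com/api/4.0/edges/edge-1/dhcp/config/relay",
#                data=body, headers={"Content-Type": "application/xml"},
#                auth=("admin", "password"), verify=False)
```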

What works:

- Virtual machines can send DHCP requests to **both** physical **and** virtual DHCP servers, and everything works fine. DHCP requests from virtualized clients arrive at **both** DHCP servers (one ignores them, as it runs in standby-failover mode) and the replies travel back correctly. Everything works, regardless of which DHCP server is the active one.

- We have no other issues whatsoever. Communication between everything virtual and everything physical works fine over the software L2 bridge.

What does not work:

- Physical systems can get DHCP addresses from the physical DHCP server (which is in the same subnet) but NOT from the virtualized DHCP server (also in the same subnet). When tracing, I see that no DHCP packets from physical clients arrive at the virtualized DHCP server. The physical switches are configured with DHCP relays pointing to the same two servers, and that has always worked.
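For anyone tracing this themselves: a properly relayed request carries the relay's IP in the BOOTP giaddr field (byte offset 24 of the DHCP payload, per RFC 951/2131), while a raw client broadcast has giaddr 0.0.0.0, which is a quick way to tell whether the packets that do arrive went through a relay at all. A standalone parse sketch (the sample packet below is hand-crafted for illustration, not from our captures):

```python
import struct
import socket

def bootp_giaddr(payload: bytes) -> str:
    """Extract the relay agent (gateway) IP from a BOOTP/DHCP payload.

    BOOTP fixed header layout (RFC 951): op(1) htype(1) hlen(1) hops(1)
    xid(4) secs(2) flags(2) ciaddr(4) yiaddr(4) siaddr(4) giaddr(4) ...
    giaddr therefore starts at byte offset 24.
    """
    return socket.inet_ntoa(payload[24:28])

# Hand-crafted DHCPDISCOVER header with giaddr = 10.0.0.1 (as a relay would stamp it).
hdr = struct.pack("!BBBB", 1, 1, 6, 1)             # op=BOOTREQUEST, htype=ethernet, hops=1
hdr += struct.pack("!IHH", 0x12345678, 0, 0x8000)  # xid, secs, broadcast flag
hdr += socket.inet_aton("0.0.0.0")                 # ciaddr
hdr += socket.inet_aton("0.0.0.0")                 # yiaddr
hdr += socket.inet_aton("0.0.0.0")                 # siaddr
hdr += socket.inet_aton("10.0.0.1")                # giaddr (stamped by relay)

print(bootp_giaddr(hdr))   # -> 10.0.0.1
```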

Under Logical Switches, we use Unicast replication mode for all Logical Switches/Virtual Wires. "Enable IP Discovery" and "Enable MAC learning" are enabled.

I suspect that our DHCP issue is caused by the replication mode we chose (Unicast). That said, I can ping all virtual machines from all physical systems.

As I'm still learning, I do not entirely understand the difference between these modes (Multicast, Unicast and Hybrid) and cannot predict the impact of changing it (it was covered in the training, but it all went so fast). I understand what multicast and unicast are in general, just not in relation to NSX's replication modes.

Can anyone shed some light on this?

1 Solution

Accepted Solutions
srodenburg
Expert

Through magic, the problem resolved itself after upgrading from v6.2.5 to v6.3. I changed nothing, just did the upgrade, and presto, the problem was gone.

In the 6.2.x versions, the problem was always there. I guess they "fixed" whatever the issue was in v6.3 (I can't distill it from the release notes, though).

Anyway. Case closed.

3 Replies
iforbes
Hot Shot

Hi. I was wondering how to ensure my VXLAN VMs are able to be part of my non-VXLAN AD domain. You mentioned you have a physical DC and a VM DC. Is the VM DC installed in a VXLAN network, with the L2 bridge facilitating the L2 connection between the two DCs? I guess my question is: how are people handling infrastructure services like AD/DNS, which have normally resided in VLAN-backed networks, so they can communicate with VXLAN networks? L2 bridging, or just ESG L3 routing (OSPF, BGP)?

srodenburg
Expert

"Is the vm DC installed in VXLAN network and the L2 Bridge facilitates the L2 connection between the two DC's?"

Yes and yes.

Like in many networks, one cannot simply forget about the "old" physical environment. There will be all kinds of physical equipment left over on the same subnets/VLANs as before the installation of NSX, and L2 bridging is then needed to provide seamless connectivity between virtual and physical. Many don't have the luxury of migrating systems into new subnets, which would allow L3 routing and negate the need for L2 bridging.

Customers without extreme performance requirements don't need hardware VTEPs as bridges (although salespeople will try to convince you otherwise...). I have found the L2 bridging performance in NSX more than adequate for things like inter-DC communication. So if you have physical DCs, DNS appliances like Infoblox, WAN optimizers like Riverbed, hardware load balancers and the like, and their investment lifecycle does not allow replacing them with virtualized versions just yet, then you must do L2 bridging (or start an often high-impact, high-risk re-IP project where physical systems are moved into new subnets so you can do L3 routing). Such projects tend to open a Pandora's box: knowledge and documentation are often lacking, hardcoded IP addresses lurk left and right, and nobody knows where they are all configured. So folks tend to keep the hardware environment as it is, resulting in L2 bridging.
