Problem Statement: VMs connected to logical switches are unable to ping their gateway (LIF on the DLR); however, they are able to communicate with each other.
We have an NSX 6.2 setup in our lab; the details are as follows:
NSX - Management
NSX Manager (Management Cluster)
1 Controller (Management Cluster)
Management VDS - Tagged to VLAN 201
Compute VDS - Tagged to VLAN 504
Global Transport Zone spanned to both clusters
Management Transport Zone which is also spanned to both clusters
2 Logical Switches - WEB (Global Transport Zone in Unicast Mode)
TRANSIT (Management Transport Zone in Unicast Mode)
DLR - with one uplink to the TRANSIT switch and one internal link to the WEB switch
Edge - with one uplink to the Management VDS and one internal link connected to the TRANSIT switch
The WEB logical switch has two VMs attached to it.
Can anyone please help identify the issue?
Thanks in advance.
The VMs can certainly talk to each other as long as both hosts are connected to the transport zone: that traffic is layer 2, so the router is irrelevant.
Not sure why you can't ping the router. Have you confirmed your configuration? The DLR internal interface should have an IP on the WEB network. Honestly, you don't need anything else. You could delete your Edge, the Mgmt transport zone (why do you have a separate transport zone?), and the TRANSIT logical switch. As soon as the WEB LS is connected to an internal interface on the DLR with a valid IP, you should immediately have connectivity. Then you can reconnect the Edge and confirm you can ping that. Then connect the Edge uplink and confirm you can ping outside.
Are you using DFW in this at all? Something I've seen a few times is people creating rules based on VMs (e.g. web can talk to web) and expecting the router interface to be included because it's part of the same subnet.
We aren't using DFW, and as suggested I will try with only one transport zone.
Also not sure why the VMs are unable to reach the DLR, because the configuration is pretty straightforward.
Will update soon.
Now that I read this through again, it is a bit odd with the two transport zones and the DLR being instantiated into both (if it even can be). I've honestly never seen that config, so it may well be the root of your problems, as mentioned by other posters. I suspect that a DLR can only belong to one TZ, but I've never seen that question/scenario addressed before. If that is the case, then it's odd that the UI doesn't mask the logical switches where they shouldn't be available.
Can the VMs get to the Edge?
Let us know how it goes!
Is the DLR firewall enabled? You may need to disable the firewall on the DLR.
The DLR Control VM can protect its management or uplink interfaces with the built-in firewall. Any device that needs to communicate with the DLR Control VM itself requires a firewall rule allowing it.
For example, SSH to the DLR Control VM, or OSPF adjacencies with the upstream router, will need a firewall rule. The DLR Control VM firewall can be disabled/enabled globally.
Note: do not confuse a DLR Control VM firewall rule with an NSX-v distributed firewall rule. The following image shows the firewall rule for the DLR Control VM.
That shouldn't affect the DLR internal LIF the poster is trying to reach.
NSX does not allow connecting VMs in different transport zones. It might be that you can ping between the VMs only because they are on the same host.
Can you check:
1) whether the management VTEPs are able to communicate with the compute VTEPs
2) the output of the show interface command on the DLR, identifying the interface and making sure it is up
3) the output of debug packet display interface <vNIC_name>, to check whether you are receiving any packets after initiating a continuous ping to the gateway from the VM
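The three checks above can be sketched from the CLI roughly as follows. This is a sketch, not a definitive procedure: the vmkernel interface name (vmk3), the target VTEP IP, and the DLR vNic name (vNic_2) are assumptions for illustration; substitute the values from your own environment.

```shell
# 1) From an ESXi host in the management cluster, ping a compute-cluster
#    VTEP over the VXLAN netstack. -d (don't fragment) plus a large
#    payload also verifies the jumbo MTU needed for VXLAN overhead.
vmkping ++netstack=vxlan -I vmk3 -d -s 1572 192.168.250.53   # target VTEP IP is an assumption

# 2) On the DLR (Control VM console or SSH), list the interfaces and
#    confirm the WEB LIF is up with the expected IP.
show interface

# 3) Capture traffic on the WEB LIF while a VM runs a continuous ping
#    to its gateway; the vNic name comes from the show interface output.
debug packet display interface vNic_2
```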
Thanks for the response.
We resolved the issue after restarting the netcpa daemon on the ESXi host.
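For anyone hitting the same symptom, restarting the netcpa agent from the ESXi host shell looks roughly like this (a sketch of the fix described above; netcpa is the control-plane agent that syncs VXLAN/DLR state between the host and the NSX controllers):

```shell
# Check whether the netcpa daemon is running on the host.
/etc/init.d/netcpad status

# Restart it; the host then re-syncs its VXLAN and DLR state
# with the NSX controllers.
/etc/init.d/netcpad restart
```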