You can add a static route in the Simplified UI. Go to Networking > Tier-0 Gateways, click the three dots next to your T0, and choose Edit. There you will see a Routing section where you can set static routes, as in the screenshot below.
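If you prefer doing it outside the UI, the same route can be created through the NSX-T Policy API. A rough sketch below, assuming a Tier-0 with ID `lab-t0` and a manager reachable at `nsx-manager` (both placeholders for your environment):

```shell
# Sketch: create a default static route on a Tier-0 via the Policy API.
# "nsx-manager", "lab-t0", and the next-hop IP are placeholders — adjust
# them to your environment. You'll be prompted for the admin password.
curl -k -u admin -X PATCH \
  "https://nsx-manager/policy/api/v1/infra/tier-0s/lab-t0/static-routes/default-route" \
  -H "Content-Type: application/json" \
  -d '{
        "network": "0.0.0.0/0",
        "next_hops": [ { "ip_address": "10.20.8.1", "admin_distance": 1 } ]
      }'
```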
Thanks Mauricio!
Since static routes are being discussed, I have a question about my setup. I'm also running a lab environment. My physical-world gateway/firewall/internet router is 10.20.8.1 (NAT example). My NSX-T lab transport nodes all have four NICs: two are left on the vDS and two are dedicated to NSX-T.
My ESXi servers and vCenter hang off the vDS and use 10.20.8.1 as their default gateway for internet connectivity. I've successfully configured a Tier-1 gateway (for logical networks) and a Tier-0 gateway for north-south routing.
I also don't want to configure BGP and would prefer to just use static routes between my Tier-0 and the physical gateway/firewall/internet router. I've configured static routes on the Tier-0 for both 0.0.0.0/0 (internet) and 10.20.8.0/22 (physical network), with a next-hop address of 10.20.8.1 (is that correct?) via the uplink interface I configured (on my VLAN-backed transport zone, IP 10.20.8.253).
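One way to confirm the static routes actually made it into the Tier-0 forwarding table is from the edge node CLI. A sketch (the VRF number varies per deployment, so check the `get logical-routers` output first):

```shell
# On the edge node CLI: list the logical routers, enter the Tier-0
# service router's VRF (the ID shown — 1 here is just an example),
# and dump its routing table.
get logical-routers
vrf 1
get route
# Static routes are flagged in the output; you should see 0.0.0.0/0
# and 10.20.8.0/22 pointing at 10.20.8.1 if the config took effect.
```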
When I run a `get route` on the Tier-0 I can see all of the NSX-T logical networks I created (so I know the Tier-1 is successfully advertising its routes to the Tier-0). From the Tier-0 CLI I can ping 10.20.8.1 (although I'm not sure whether that's just because the Tier-0 management interface is on 10.20.8.0/22).
When I jump on a VM connected to an NSX-T logical network, I'd think I could now ping physical network IPs (i.e. IPs in 10.20.8.0/22), but I can't. Is there anything missing from my static route setup? Do I also have to configure static routes on my physical-world gateway/firewall/internet router for the NSX-T logical networks? In the end I'd like my VMs on logical networks to be able to communicate with my physical network and also get internet access, all via static routes.
Your VMs attached to an NSX-T-backed segment are likely using (at least they should be) a different L3 addressing scheme than those in your physical network. What's probably happening is that traffic is able to reach the external network from inside, but because you have no static route configured on your physical gateway, that traffic has no way to get back in. So whatever L3 device you're using as the router at the physical gateway needs a static route that directs traffic for the logical networks to the T0 uplink on your edges (or the HA VIP of the T0, if configured).
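For example, if the segments behind your T0 lived in 172.16.0.0/16 (a placeholder range — substitute your actual segment subnets) and the T0 uplink is 10.20.8.253, a Linux-based gateway would need something like:

```shell
# On the physical gateway: point the NSX-T logical networks back at the
# Tier-0 uplink. 172.16.0.0/16 is a placeholder; exact syntax varies by
# vendor (e.g. on Cisco IOS: ip route 172.16.0.0 255.255.0.0 10.20.8.253).
ip route add 172.16.0.0/16 via 10.20.8.253
```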
Thanks for the quick reply. I figured it might be something like that. I have another question (just to ensure I've configured things correctly). My setup follows this topology:
Now, I've deployed my Edge appliances to hang off the vDS as above. In this specific architecture, the Edge VTEP needs to be in the same VLAN as the host transport node VTEPs (which I've done). While I can successfully ping between my host transport node VTEPs, I cannot ping from the Edge VTEP to a host transport node VTEP. Looking at things more closely, is this because the physical switch port (P0 in the picture above), the vDS, and the Edge transport port group (vNIC2 in the picture, hosting the Edge VTEP) all need to be configured with jumbo frames (a minimum MTU of 1600)? Presently, I don't have jumbo frames configured on my vDS uplinks or port groups.
Could that also be causing issues for N-S traffic (along with the need to ensure my logical networks are configured on my upstream router with a static route)?
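A quick way to test whether MTU along the VTEP path is the culprit is `vmkping` from an ESXi host with don't-fragment set. A sketch — the Edge VTEP IP here is a placeholder, and the overlay netstack is typically still named `vxlan` even on GENEVE-based NSX-T:

```shell
# From an ESXi host shell: ping the Edge VTEP over the overlay netstack
# with don't-fragment (-d). 1572 bytes of payload + 8 (ICMP header)
# + 20 (IP header) = a 1600-byte packet on the wire.
# Replace 10.30.0.12 with your actual Edge VTEP IP.
vmkping ++netstack=vxlan -d -s 1572 10.30.0.12
# If this fails but a small size (e.g. -s 1400) succeeds, something in
# the path (vDS, port group, or physical switch) is still at 1500 MTU.
```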
Your problem is likely that the MTU isn't configured correctly on your vDS.
So the Edge VTEP uplink needs to be configured with jumbo frames? I'll give that a try.
If your edge is virtual, then the connection to the transport VLAN shared with your transport nodes needs to be configured for an MTU of 1600. That isn't jumbo frames per se, but it's generally what's done.
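After raising the vDS MTU (done at the switch level in vCenter), you can sanity-check the host-side view from an ESXi shell. A sketch, assuming the vDS carrying the transport VLAN is the one listed:

```shell
# From an ESXi host: show the distributed switches this host participates
# in, including their MTU — look for "MTU: 1600" on the transport vDS.
esxcli network vswitch dvs vmware list

# Also confirm the VTEP vmkernel interfaces themselves picked up the MTU
# (check the MTU column for the vmk used as the tunnel endpoint).
esxcli network ip interface list
```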
Technically, anything over the regular Ethernet MTU of 1500 is jumbo. Thanks again.
Thanks for the correction.