I'm currently researching the NSX-T 3.0 network topology for the TKGI 1.9 deployment.
After reading these pages: https://docs.pivotal.io/tkgi/1-9/console-prereqs-nsxt-automatednat.html and https://docs.pivotal.io/tkgi/1-9/console-prereqs-nsxt-byot.html, I have some questions.
My question is: how does the TKGI Management Console (let's call it MC) communicate with the TKGI management plane nodes (such as the TKGI API and Ops Manager)?
I guess that in most new TKGI deployments, the MC will initially be deployed on a VLAN network (e.g. 192.168.40.x/24), while the T0 router's uplink will be assigned to a different VLAN network (e.g. 192.168.9.x/24) with a static default route (0.0.0.0/0) configured.
(Assume that the physical switch VLANs are all routable.)
However, if my guess is right, how will the MC reach the TKGI management VMs, which are normally deployed on a logical switch (overlay) segment behind the T0/T1 routers? The routing (specifically, the gateway IP configured on the MC) seems like it would be a problem.
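To make my assumption concrete, here is a rough sketch of the topology I have in mind (the VLAN addresses are just the examples above, and the overlay subnet is a placeholder I made up):

```
 physical VLAN 192.168.40.0/24        physical VLAN 192.168.9.0/24
 +-------------------------+          +-------------------------+
 | MC (gw = VLAN gateway)  |          | T0 uplink               |
 +-------------------------+          +------------+------------+
                                                   |
                                              T0 router -- T1 -- overlay segment
                                                                 (e.g. 10.x.x.x/24:
                                                                  TKGI API, Ops Manager)
```

The question is which route, on the MC or in the physical network, carries traffic for the overlay subnet back in through the T0 uplink.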
We're using TKGI 1.9 on NSX-T 2.5.x.
The Management Console itself, which is used to deploy everything else, is on our primary management VLAN outside of the NSX-T environment. The TKGI API, BOSH, Harbor, etc. all live on a segment created by the management console for these systems. This segment hangs off a T1 for our platform resources, which in turn is connected to the T0. Another dynamically created T1, for K8s resources, hangs off the same platform T0; it has 4 segments for the K8s nodes, pods, etc. Routing is created automatically as part of the deployment. The T0 router's uplinks are just the same as with any T0, i.e. 2 VLANs which then peer with your leaf switches via BGP.
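If it helps, the layout described above looks roughly like this (the labels are mine, not names the console assigns):

```
leaf switches == BGP == T0 uplinks (2 VLANs)
                          |
                     platform T0
                    /            \
          platform T1          K8s T1 (dynamically created)
              |                   |
        mgmt segment        4 segments (K8s nodes, pods, ...)
  (TKGI API, BOSH, Harbor)

MC: primary management VLAN, outside NSX-T entirely
```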
You hit the nail on the head already - so let's tease this out a little bit.
The major question is - what network deployment model are you using? If you're using hybrid NAT without dynamic routing or all-NAT in general, you will want your management network to be in the same "routing table" as your TKGi environment, e.g. on a T1 subtending the same Tier-0.
VLANs add a bit of unnecessary complexity - forwarding doesn't need to go out to the ToRs, so it'll be a bit more efficient following the virtual topology. Unless you're doing anything that requires an SR (service router), like firewalling, load balancing, etc. on the management network, the Distributed Router does all the work for you - so having a separate Tier-1 router incurs minimal overhead.
It does get a bit weird - the Tier-1 is going to NAT, but make NAT exceptions for traffic flowing to and from the other Tier-1 gateway. This also has the nice side benefit of giving you a good place to do good ol' network troubleshooting from if you're obfuscating the rest of the environment behind NAT.
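A sketch of what those Tier-1 NAT rules might look like (the subnets, priorities, and translated address are purely illustrative; in NSX-T a lower priority number wins, so the NO_SNAT exception has to sort before the SNAT rule):

```
Tier-1 NAT table (illustrative)
Priority  Action   Source        Destination   Translated
100       NO_SNAT  10.1.0.0/16   10.2.0.0/16   -             # to/from the other Tier-1: no NAT
1024      SNAT     10.1.0.0/16   any           192.168.9.50  # everything else northbound
```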
Keep it simple: the MC injects the config YAML into Ops Manager based on the inputs you enter in its GUI. If your management plane is on a segment, a NAT config has to be provided to keep communications working, but NAT sometimes adds complexity, which is why I prefer to have the TKGI tiles and the MC on the same VDS VLAN-backed dvPortgroup.
Hope this helps.
NSX-T BYOT with NAT mode unchecked in our case. Correct, the management network is in the same routing table as the TKGI environment; the routing tables are shared via the platform T0. Nothing in TKGI ends up going north. SNAT happens on the dynamically created PKS T1, which does SNAT for the pods and nodes.
So that would be the "Hybrid NAT Topology" I mentioned before - https://docs.pivotal.io/tkgi/1-9/nsxt-topologies.html
So what I did for the MC network was create the segment as specified, but made its prefix routable, similar to the diagram under "Hybrid".
PKS dynamically created the NAT and NO-NAT policies for me at stand-up; in your case, they may need to be created under the N-S policies on the Tier-1.
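For reference, a NO-NAT exception on a Tier-1 can be created through the NSX-T Policy API, roughly like this (the Tier-1 ID, rule ID, and subnets below are made-up placeholders; check the API reference for your exact NSX-T version before using):

```
PUT /policy/api/v1/infra/tier-1s/<tier-1-id>/nat/USER/nat-rules/no-snat-to-mgmt
{
  "action": "NO_SNAT",
  "source_network": "10.1.0.0/16",
  "destination_network": "10.2.0.0/16",
  "sequence_number": 10,
  "enabled": true
}
```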