NSX-T Load Balancer is not working (vSphere with Tanzu)

Cambouch (Contributor)

Hello,

I am enabling vSphere with Tanzu on my cluster. It automatically creates the load balancer on the NSX-T side, so nothing was configured manually; the whole setup was done automatically.

In my environment, the workload network is 10.244.0.0/21 and the VIP network is 172.16.80.32/27.

The backend servers work normally and are accessible, but the control plane node IP (172.16.80.34) is not reachable, even though it responds to ping and the specified port is open (I tried telnet on it).

Any tips on what I should check?
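
For reference, here is roughly the kind of check I am running; a minimal sketch, assuming port 6443 (the usual Supervisor API server port). The plain TCP connect succeeds just as telnet does, but the connection goes no further than that:

```python
import socket
import ssl

VIP = "172.16.80.34"   # control plane VIP from my environment
PORT = 6443            # assumed Supervisor API server port

# Plain TCP connect: succeeds, just like telnet, so the port looks "open".
with socket.create_connection((VIP, PORT), timeout=5) as sock:
    print("TCP connect: OK")

    # Full TLS handshake: this is where the connection stalls,
    # so the service behind the VIP is effectively unreachable.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with ctx.wrap_socket(sock, server_hostname=VIP) as tls:
            print("TLS handshake: OK,", tls.version())
    except (ssl.SSLError, OSError) as exc:
        print("TLS handshake failed:", exc)
```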

dragance (VMware Employee)

Can you please give more details on how 172.16.80.34 is not reachable yet pingable? How are you testing that?

Here is a link to a quick demo/proof-of-concept guide for Tanzu that you may find useful:
https://core.vmware.com/resource/tanzu-proof-concept-guide#tanzu-basic

BR,

Dragan

Cambouch (Contributor)

Hello Dragance,

Thank you for your reply.

I am testing the IP from a machine attached to a Tier-1 gateway created under the same Tier-0 that contains my workload domain.

 

dragance (VMware Employee)

Thank you,

That's quite a normal setup. If I understand correctly, from that machine under the T1 you are able to reach IPs inside the workload domain.

Are you announcing that subnet from your T0 to the rest of the network somehow (via dynamic or static routing), so that the rest of the network knows how to reach the Tanzu workloads?

BR,

Dragan

Cambouch (Contributor)

Yes, I added a static route 0.0.0.0/0 with 172.16.80.254 (the gateway) as the next hop, and I enabled route advertisement on the Tier-0 so the underlying segments are reachable.
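
If it helps, this is roughly the equivalent call through the NSX-T Policy API; a minimal sketch only, where the manager address, Tier-0 ID, route ID, and credentials are all placeholders rather than my actual values:

```python
import requests

NSX_MANAGER = "nsx-manager.example.local"  # placeholder manager FQDN
T0_ID = "tier0-gw"                         # placeholder Tier-0 gateway ID
ROUTE_ID = "default-route"                 # arbitrary identifier for the route

# PATCH creates the static route under the Tier-0 if it does not exist yet.
url = (f"https://{NSX_MANAGER}/policy/api/v1/infra/"
       f"tier-0s/{T0_ID}/static-routes/{ROUTE_ID}")

payload = {
    "network": "0.0.0.0/0",
    "next_hops": [{"ip_address": "172.16.80.254", "admin_distance": 1}],
}

resp = requests.patch(url, json=payload,
                      auth=("admin", "VMware1!"),  # placeholder credentials
                      verify=False)                # lab only: self-signed cert
resp.raise_for_status()
print("Static route configured:", resp.status_code)
```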

 

dragance (VMware Employee)

You don't need to add any routing inside the NSX domain beyond what you mentioned: you need dynamic or static routing on the T0 to provide connectivity outside NSX (north-south). NSX has full routing features for announcing segments from the T1 up to the T0, networks from the T0 to the outside, and so on.

I have a lab environment built exactly as you described yours, and the automatic LB inside NSX works as it should. Maybe a couple of explanations of the different Tanzu networks you're using will help a little:

- Namespace network (IP addresses for workloads attached to Supervisor Cluster namespace segments; if NAT mode is unchecked, this CIDR must be routable): should be at least a /22, or better a /20.

- Service CIDR (internal block from which IPs for Kubernetes ClusterIP services are allocated; it must not overlap with the IPs of the Workload Management components: vCenter, NSX, ESXi hosts, DNS, NTP), e.g. 10.96.0.0/22 - see the overlap check sketched after this list.

- Ingress CIDR (used to allocate IP addresses for services published via service type LoadBalancer and Ingress across all Supervisor Cluster namespaces): advertised from the NSX domain to the external world; incoming traffic to this space is translated to the namespace network.

- Egress CIDR (allocates IP addresses for SNAT (Source Network Address Translation) for traffic exiting the Supervisor Cluster namespaces): basically the opposite of the ingress space, handling outbound traffic, and also advertised from the NSX routing domain.

- Management CIDR, for the obvious purpose: it is the ONLY network created manually during Workload Management setup (it can also be an NSX segment).
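
Those overlap rules are easy to verify up front. Here is a minimal sketch using Python's ipaddress module; every CIDR and component IP below is an illustrative placeholder, not a recommendation:

```python
import ipaddress
from itertools import combinations

# Illustrative values only; substitute the CIDRs from your own planning sheet.
cidrs = {
    "namespace":  ipaddress.ip_network("10.244.0.0/21"),
    "service":    ipaddress.ip_network("10.96.0.0/22"),
    "ingress":    ipaddress.ip_network("172.16.80.32/27"),
    "egress":     ipaddress.ip_network("172.16.80.64/27"),
    "management": ipaddress.ip_network("192.168.10.0/24"),
}

# None of the Tanzu networks should overlap one another.
for (name_a, net_a), (name_b, net_b) in combinations(cidrs.items(), 2):
    if net_a.overlaps(net_b):
        print(f"OVERLAP: {name_a} {net_a} <-> {name_b} {net_b}")

# The service CIDR must also avoid the Workload Management components
# (vCenter, NSX Manager, ESXi hosts, DNS, NTP).
components = ["192.168.10.10", "192.168.10.11"]  # e.g. vCenter, NSX Manager
for ip in components:
    if ipaddress.ip_address(ip) in cidrs["service"]:
        print(f"CONFLICT: component {ip} falls inside the service CIDR")
```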

BR,

Dragan

Cambouch (Contributor)

Yeah, I am talking about the static route I defined on my Tier-0 that routes traffic to my physical network.

Here is a brief summary of the configuration I set up following the official documentation.

 

Cambouch (Contributor)

Thank you for your collaboration, dragance.
The problem was solved after reconfiguring the MTU between the Edge TEP and the ESXi TEP and setting it to 9000.
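
For anyone hitting the same symptom: an undersized TEP MTU silently drops large Geneve-encapsulated frames, so small packets (ping, the bare TCP handshake that telnet performs) get through while larger exchanges such as the API server's TLS handshake never complete. Below is a rough, Linux-only sketch that probes how large a don't-fragment datagram the local stack will send toward the VIP; the destination port and payload sizes are illustrative:

```python
import socket

DEST = "172.16.80.34"  # the VIP that was pingable but not reachable

# Linux-only: request path MTU discovery so oversized datagrams fail
# locally with EMSGSIZE instead of being fragmented.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                socket.IP_PMTUDISC_DO)
sock.connect((DEST, 9))  # discard port; we only care about send() errors

for size in (1200, 1400, 1472, 8000, 8972):
    try:
        sock.send(b"x" * size)
        print(f"{size}-byte payload: sent (fits the known path MTU)")
    except OSError as exc:  # EMSGSIZE once payload + headers exceed the MTU
        print(f"{size}-byte payload: {exc}")

# The kernel caches its current path MTU estimate on a connected socket.
print("Kernel path MTU estimate:", sock.getsockopt(socket.IPPROTO_IP,
                                                   socket.IP_MTU))
sock.close()
```

On the ESXi side, the usual TEP-to-TEP check is vmkping ++netstack=vxlan -d -s 8972 <remote-TEP-IP>, since 8972 bytes of ICMP payload plus 28 bytes of headers fills a 9000-byte MTU exactly.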

dragance (VMware Employee)

Glad it worked out 👍

We concentrated on the Tanzu/LB side, but we probably would have come around to the overlay at some point in the troubleshooting 😊

BR,

Dragan
