VMware Cloud Community
sghose0007
Enthusiast

vSphere With Kubernetes deployment issues

Hi All,

I have enabled Workload Management with Ingress - 192.168.1.0/24 and Egress - 192.168.2.0/24 (VLAN 2), and my TEP pool is under the 192.168.0.1/24 (VLAN 3) IP pool. These networks are routable.

I am able to ping the interface from the external network, and the Edge TEP IP address from the ESXi host.
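For reference, this is roughly how that check can be run from an ESXi host; the vmkernel interface name (vmk10) and the Edge TEP address below are placeholders, not values from this environment:

# Basic reachability from the host TEP interface (NSX uses the "vxlan" netstack)
vmkping ++netstack=vxlan -I vmk10 192.168.0.201
# Same test with don't-fragment set and a payload sized for a 1600-byte frame (1572 + 28 bytes of IP/ICMP headers)
vmkping ++netstack=vxlan -I vmk10 -d -s 1572 192.168.0.201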

Now I have the Control Plane Node IP address - 192.168.1.1

And the 3 control plane VMs have IP addresses 192.168.0.124, .125, .126 and .127

So I can browse to 192.168.0.124 and the other IPs and get the Kubernetes CLI Tools page.

But when browsing the Control Plane Node IP address - 192.168.1.1 (I am able to ping it) - I don't get to the same page. I have also created a namespace - demo-app-01 - which should also be reachable from the same IP.

But I am unable to do so.
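For anyone checking the same symptom, this is roughly the kind of test involved; the username is a placeholder and kubectl vsphere assumes the vSphere Plugin for kubectl from the CLI Tools page is installed:

# The CLI Tools landing page should answer on the VIP just as it does on the individual control plane VM IPs
curl -k https://192.168.1.1/
# Log in through the VIP with the vSphere Plugin for kubectl (username is a placeholder)
kubectl vsphere login --server=192.168.1.1 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify
kubectl config use-context demo-app-01
kubectl get pods -n demo-app-01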

I urgently need some ideas on how to fix this.

12 Replies
scott28tt
VMware Employee

This is on VMware Cloud Foundation, right?


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (ie. not in any official capacity)
VMware Training & Certification blog
sghose0007
Enthusiast

No. Below is the environment used:

ESXi hosts – 4 x Cisco UCS B200 M4

Storage – Unity Array 10TB Datastore

vCenter Server Version – 7.0 15952599

ESXi version – VMware ESXi, 7.0.0, 15843807

NSX-T - Version 3.0.0.0.0.15946738

scott28tt
VMware Employee

vSphere with Kubernetes is only supported on VCF at this time: Requirements to use vSphere with Kubernetes


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (ie. not in any official capacity)
VMware Training & Certification blog
sghose0007
Enthusiast

Project Pacific - vSphere with Kubernetes

There was a beta build of it, and now it's available with ESXi 7.0.

Configure vSphere with Kubernetes to Use NSX-T Data Center

scott28tt
VMware Employee

You need VCF to get the license: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/vsphere/vmw-vsphere-pricing-packag...


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (ie. not in any official capacity)
VMware Training & Certification blog
sghose0007
Enthusiast

Hi Scott,

Let's assume it's running on VCF.

My issue is that the LB IP of the control plane node VMs does not bring up the same page (http-title: VMware - Download Kubernetes CLI Tools) that the individual IPs of the control plane VMs do.

jasonbochedell
Enthusiast

VCF license is not the problem.

I had this issue throughout the beta and now have the same problem with the GA bits. VMware support thinks the issue is with MTU somewhere. I've verified many times that the physical uplink ports pass unfragmented jumbo frames across all trunked VLANs.

sghose0007
Enthusiast

Hi Jason,

I was thinking the same, that the VCF license shouldn't cause this problem.

I have verified multiple times that the physical MTU is set to 9000 and that it's trunked too.

The LB IP for the control plane VMs still doesn't redirect.
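For reference, the host-side MTU can also be double-checked with esxcli; vmk10 and the destination address below are placeholders for the TEP vmkernel interface and another TEP in the environment:

# Physical NIC MTU as seen by ESXi
esxcli network nic list
# vmkernel interface MTU, including the NSX TEP interface(s)
esxcli network ip interface list
# Jumbo, don't-fragment ping from the TEP interface (9000 - 28 bytes of headers = 8972)
vmkping ++netstack=vxlan -I vmk10 -d -s 8972 192.168.0.12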

jasonbochedell
Enthusiast
Accepted Solution

sghose0007, I finally got everything sorted for this particular environment over the weekend, and everything is up and running properly now. I plan on compiling a few of the things I learned along the way in a blog post tomorrow if I have time.

The issue in this environment was related to the jumbo frames configuration: not at layer 2, but at layer 3 inter-VLAN routing.

So for example:

Layer 2:

ESXi hosts can vmkping each other with jumbo MTU frames on the same host TEP VLAN (be sure to use the -d parameter; it is very important).

Layer 3:

ESXi hosts can vmkping from the host TEP VLAN to the edge uplinks on the edge TEP VLAN with standard MTU frames.

ESXi hosts cannot vmkping from the host TEP VLAN to the edge uplinks on the edge TEP VLAN with jumbo MTU frames. This was the problem (sample commands for these tests are below).
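Roughly, the tests look like this from an ESXi host; the interface name and the TEP addresses are placeholders for whatever your environment uses, and the -s values leave room for 28 bytes of IP/ICMP headers:

# Layer 2: jumbo, don't-fragment ping to another host's TEP on the same VLAN (9000 MTU -> -s 8972)
vmkping ++netstack=vxlan -I vmk10 -d -s 8972 10.0.10.12
# Layer 3: standard-size ping to an edge TEP on the other VLAN (this worked)
vmkping ++netstack=vxlan -I vmk10 -d 10.0.20.21
# Layer 3: 1600-class ping to the same edge TEP (1600 MTU -> -s 1572; this was failing)
vmkping ++netstack=vxlan -I vmk10 -d -s 1572 10.0.20.21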

While the switch ports were passing unfragmented jumbo frames at layer 2, the switch VLAN interfaces were not configured for jumbo frames, so they weren't routing frames larger than 1500 bytes.

Once jumbo frames were enabled for inter-VLAN routing, the ping tests were successful and everything fell into place. The NSX-T Geneve tunnel requires a minimum MTU of 1600.
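As an illustration only, on a Nexus-style switch the fix amounts to raising the MTU on the SVIs that route between the TEP VLANs; the VLAN IDs below are placeholders and the exact commands depend on the switch platform:

! Host TEP SVI (placeholder VLAN ID)
interface Vlan103
  mtu 9216
! Edge TEP SVI (placeholder VLAN ID)
interface Vlan104
  mtu 9216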

It would also be wise to verify that the overlay VLAN (the third of the three required VLANs) supports jumbo frames as well, and that the associated distributed switch port group is configured as type VLAN trunking (1-4094).

Jas

sghose0007
Enthusiast

Thanks for the guidance, Jason.

This is where I am now:

It looks like an issue with the edge, but what exactly?

Edge to VC - pings with packet sizes larger than 1600 work

Edge to NSX - works

Edge to control plane VM IPs (192.168.0.0/24) - works

Edge to control plane VM LB IP (1.0/27) - pings with packet sizes larger than 1600 do not work

Edge to host TEP IP - does not ping

Edge to Edge TEP IP - does not ping

But:

Host to host TEP IP - works

Host to Edge TEP IP - works

And in the uplink profiles the MTU is 9000 by default (Global MTU).

Ingress CIDRs

192.168.1.32/27

Egress CIDRs

192.168.1.64/27
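In case it helps anyone re-running the Edge-side tests above, those pings can be done from the edge node CLI; the address is a placeholder, the right VRF may need to be selected first, and the exact options can vary by NSX-T version:

get logical-routers
vrf 0
ping 192.168.0.12 size 1572 dfbit enable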

sghose0007
Enthusiast

Thanks Jas,

Having different VLANs for the host and Edge TEP pools fixed the issue.

daphnissov
Immortal

When using vDS 7, it is a requirement that the host transport nodes and edge transport nodes use different networks that can route to each other.
