chen_fred
Contributor

vSphere 7 with Kubernetes: authorization always fails


Hi

I set up a vSphere 7 with NSX-T environment and enabled Workload Management. All the steps completed successfully, and afterwards I created a new namespace and a dev01 user.

Then I tried to log in as dev01, but the login fails with the following error. (192.168.50.1 is the control plane node IP address shown on the Workload Management page; 192.168.30.100 is the control plane VM management VIP.)

WARN[0026] Error occurred during HTTP request: Post https://192.168.50.1/wcp/login: dial tcp 192.168.50.1:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

ERRO[0026] Login failed: Post https://192.168.50.1/wcp/login: dial tcp 192.168.50.1:443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

Logged in successfully.

You have access to the following contexts:

   192.168.30.100

If the context you wish to use is not in this list, you may need to try logging in again later, or contact your cluster administrator.

I checked wcpsvc.log on the vCenter appliance. It always shows that authorization passes, but then the Security Context is missing.

Do you have any advice?

The wcpsvc.log output is as follows:

2020-06-16T02:43:04.534Z debug wcp [opID=5edf0f6d] List workloads for dev01@VSPHERE.LOCAL

2020-06-16T02:43:04.534Z debug wcp [opID=5edf0f6d] User dev01@VSPHERE.LOCAL is authorized to access fred.

2020-06-16T02:43:04.534Z debug wcp [opID=5edf0f6d] Got list of user workloads: [{fred 192.168.50.1}]

2020-06-16T02:43:04.534Z debug wcp [opID=vapi] Validating output

2020-06-16T02:43:04.534Z debug wcp [opID=vapi] Request processing complete

2020-06-16T02:43:04.534Z debug wcp [opID=vapi] Sending response with output {"output":[{"STRUCTURE":{"com.vmware.vcenter.namespaces.user.instances.summary":{"master_host":"192.168.50.1","namespace":"red"}}}]}

2020-06-16T02:43:05.93Z debug wcp healthz for 192.168.30.100 = "ok"

2020-06-16T02:43:23.905Z debug wcp Attempting VAC stats push

2020-06-16T02:43:23.905Z debug wcp Pushing VAC data to endpoint: http://localhost:15080/analytics/telemetry/ph/api/hyper/send?_c=vsphere.gcm.1_0_0&_i=3ddbce68-1ffe-4...

2020-06-16T02:43:25.116Z debug wcp Rest client for vmodl2 API calls exists, checking session validity

2020-06-16T02:43:25.124Z debug wcp Rest client for vmodl2 API calls is still valid.

2020-06-16T02:43:25.169Z debug wcp Found appliance logging forwarding config: []

2020-06-16T02:44:05.934Z debug wcp healthz for 192.168.30.100 = "ok"

2020-06-16T02:45:05.939Z debug wcp healthz for 192.168.30.100 = "ok"

2020-06-16T02:45:36.959Z debug wcp [opID=vapi] opId was not present for the request

2020-06-16T02:45:36.959Z debug wcp [opID=vapi] Handling new request with input {"STRUCTURE":{"operation-input":{}}}

2020-06-16T02:45:36.959Z debug wcp [opID=vapi] Service specific authorization scheme for com.vmware.cis.session not found.

2020-06-16T02:45:36.959Z debug wcp [opID=vapi] Service specific authorization scheme for com.vmware.cis.session not found.

2020-06-16T02:45:36.959Z debug wcp [opID=vapi] Could not find package specific auth scheme for com.vmware.cis.session

2020-06-16T02:45:37.063Z debug wcp Got authz request for com.vmware.cis.session.create

2020-06-16T02:45:37.063Z debug wcp [opID=vapi] Searching for service com.vmware.cis.session

2020-06-16T02:45:37.064Z debug wcp [opID=vapi] Searching for operation create

2020-06-16T02:45:37.064Z debug wcp [opID=vapi] Validating input

2020-06-16T02:45:37.064Z debug wcp [opID=vapi] Invoking operation

2020-06-16T02:45:37.064Z info wcp [opID=5edf0f8a] Created session for dev01@%!s(*string=0xc0019b58f0)

2020-06-16T02:45:37.064Z info wcp [opID=5edf0f8a] Scheduling session cleanup in 2m26.935851009s

2020-06-16T02:45:37.064Z debug wcp [opID=5edf0f8a] Created session, returning session id

2020-06-16T02:45:37.064Z debug wcp [opID=vapi] Validating output

2020-06-16T02:45:37.064Z debug wcp [opID=vapi] Request processing complete

2020-06-16T02:45:37.064Z debug wcp [opID=vapi] Sending response with output {"output":{"SECRET":"*redacted*"}}

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Processing operation with opId wcp-authproxy-140706487955408

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Handling new request with input {"STRUCTURE":{"operation-input":{}}}

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Service specific authorization scheme for com.vmware.vcenter.namespaces.user.instances not found.

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Service specific authorization scheme for com.vmware.vcenter.namespaces.user.instances not found.

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Could not find package specific auth scheme for com.vmware.vcenter.namespaces.user.instances

2020-06-16T02:45:37.066Z info wcp Got session for dev01@VSPHERE.LOCAL

2020-06-16T02:45:37.066Z debug wcp Successfully validated session token.

2020-06-16T02:45:37.066Z debug wcp Got authz request for com.vmware.vcenter.namespaces.user.instances.list

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Searching for service com.vmware.vcenter.namespaces.user.instances

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Searching for operation list

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Validating input

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Invoking operation

2020-06-16T02:45:37.066Z debug wcp [opID=5edf0f8b] List workloads for dev01@VSPHERE.LOCAL

2020-06-16T02:45:37.066Z debug wcp [opID=5edf0f8b] User dev01@VSPHERE.LOCAL is authorized to access fred.

2020-06-16T02:45:37.066Z debug wcp [opID=5edf0f8b] Got list of user workloads: [{fred 192.168.50.1}]

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Validating output

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Request processing complete

2020-06-16T02:45:37.066Z debug wcp [opID=vapi] Sending response with output {"output":[{"STRUCTURE":{"com.vmware.vcenter.namespaces.user.instances.summary":{"master_host":"192.168.50.1","namespace":"red"}}}]}

2020-06-16T02:46:05.943Z debug wcp healthz for 192.168.30.100 = "ok"

2020-06-16T02:46:28.283Z error wcp [opID=vapi] Security Context missing in the request

2020-06-16T02:46:28.283Z debug wcp [opID=vapi] SecurityContext not passed in the request. Creating an empty security context

2020-06-16T02:46:28.283Z debug wcp [opID=vapi] opId was not present for the request

2020-06-16T02:46:28.283Z debug wcp [opID=vapi] Handling new request with input {"STRUCTURE":{"operation-input":{}}}

2020-06-16T02:46:28.283Z debug wcp [opID=vapi] Service specific authorization scheme for com.vmware.vapi.std.introspection.service not found.

2020-06-16T02:46:28.283Z debug wcp [opID=vapi] Service specific authorization scheme for com.vmware.vapi.std.introspection.service not found.

2020-06-16T02:46:28.284Z debug wcp [opID=vapi] Could not find package specific auth scheme for com.vmware.vapi.std.introspection.service

2020-06-16T02:46:28.284Z debug wcp [opID=vapi] Authn scheme Id is not provided but NO AUTH is allowed hence invoking the operation

2020-06-16T02:46:28.284Z error wcp [opID=vapi] SecurityCtx doesn't have property AUTHN_IDENTITY

2020-06-16T02:46:28.284Z error wcp [opID=vapi] Invalid authentication result

2020-06-16T02:46:28.284Z debug wcp [opID=vapi] Skipping authorization checks, because there is no authentication data for: com.vmware.vapi.std.introspection.service.list


13 Replies
daphnissov
Immortal

It looks like a networking issue, so start with basic networking troubleshooting. Is the MTU consistent? Can you ping? Is there packet loss? Etc.
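To make the MTU-consistency check concrete, here is a small sketch (a hypothetical helper, not a VMware tool) that flags mismatches in a list of "interface mtu" pairs you might collect from the ESXi hosts and the NSX Edge uplink profiles:

```shell
#!/bin/sh
# Hypothetical helper: read "name mtu" pairs on stdin and report any
# interface whose MTU differs from the first one listed.
check_mtus() {
  awk 'NR == 1 { ref = $2 }
       $2 != ref { bad = 1; print "MTU mismatch: " $1 " has " $2 " (expected " ref ")" }
       END { exit bad }'
}

# Example: one VLAN interface on the transport path was left at 1500.
printf 'vmk10 9000\nedge-tep 9000\nuplink-vlan 1500\n' | check_mtus \
  || echo "=> fix the mismatched interface before retesting"
```

With the example input this prints `MTU mismatch: uplink-vlan has 1500 (expected 9000)`; the helper exits non-zero so it can gate further tests in a script.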

chen_fred
Contributor

I can ping the control plane cluster IP, but kubectl vsphere login fails.

The Geneve tunnel is up and all MTUs are set to 9000.

daphnissov
Immortal

Add the switch -v 10 to the end of your kubectl vsphere login command and paste the output.

chen_fred
Contributor

I have reinstalled many times and have never been able to log in successfully.

Today I tried changing some network settings, but the result is the same. Everything reports success and there are no errors or failures in vCenter or NSX-T, yet I still cannot log in.

The output of kubectl vsphere login -v 10 is as follows:

C:\Users\Administrator>kubectl vsphere login --server=https://192.168.60.33 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify -v 10

DEBU[0000] User passed verbosity level: 10

DEBU[0000] Setting verbosity level: 10

DEBU[0000] Setting request timeout:

DEBU[0000] login called as: C:\Windows\system32\kubectl-vsphere.exe login --server=https://192.168.60.33 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify -v 10

DEBU[0000] Creating wcp.Client for 192.168.60.33.

INFO[0120] Got unexpected HTTP error: Head https://192.168.60.33/sdk/vimServiceVersions.xml: read tcp 192.168.20.200:50950->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host.

ERRO[0240] Error occurred during HTTP request: Get https://192.168.60.33/wcp/loginbanner: read tcp 192.168.20.200:50970->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host.

There was an error when trying to connect to the server.

Please check the server URL and try again.

FATA[0240] Error while connecting to host 192.168.60.33: Get https://192.168.60.33/wcp/loginbanner: read tcp 192.168.20.200:50970->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host..

exit status 1

The interesting thing is that I have also installed a nested vSphere environment and enabled Workload Management there; it deployed successfully and I can log in.

daphnissov
Immortal

What is this address where you're trying to connect? You should be using the load balancer VIP for the supervisor control plane.

chen_fred
Contributor

It is the load balancer IP of the control plane VMs. The IP is displayed under vCenter > Workload Management > Clusters.

It is different from the first post because I reinstalled the environment and changed some IP address settings.

It seems I did not describe it clearly before. The supervisor control plane VMs now have two sets of IPs:

1. Management IPs: 192.168.30.6-192.168.30.8, with VIP 192.168.30.5

2. Kubernetes IPs: 10.244.0.194-10.244.0.196, with load balancer VIP 192.168.60.33, which is the address displayed on the vCenter > Workload Management > Clusters page.

I can ping 192.168.60.33.

I tried kubectl vsphere login against 192.168.60.33; the output is as follows:

C:\Users\Administrator>kubectl vsphere login --server=https://192.168.60.33 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify -v 10

DEBU[0000] User passed verbosity level: 10

DEBU[0000] Setting verbosity level: 10

DEBU[0000] Setting request timeout:

DEBU[0000] login called as: C:\Windows\system32\kubectl-vsphere.exe login --server=https://192.168.60.33 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify -v 10

DEBU[0000] Creating wcp.Client for 192.168.60.33.

INFO[0120] Got unexpected HTTP error: Head https://192.168.60.33/sdk/vimServiceVersions.xml: read tcp 192.168.20.200:59543->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host.

ERRO[0240] Error occurred during HTTP request: Get https://192.168.60.33/wcp/loginbanner: read tcp 192.168.20.200:59574->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host.

There was an error when trying to connect to the server.

Please check the server URL and try again.

FATA[0240] Error while connecting to host 192.168.60.33: Get https://192.168.60.33/wcp/loginbanner: read tcp 192.168.20.200:59574->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host..

exit status 1

I then tried kubectl vsphere login against 192.168.30.5; the output is as follows:

C:\Users\Administrator>kubectl vsphere login --server=https://192.168.30.5 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify -v 10

DEBU[0000] User passed verbosity level: 10

DEBU[0000] Setting verbosity level: 10

DEBU[0000] Setting request timeout:

DEBU[0000] login called as: C:\Windows\system32\kubectl-vsphere.exe login --server=https://192.168.30.5 --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify -v 10

DEBU[0000] Creating wcp.Client for 192.168.30.5.

INFO[0000] Does not appear to be a vCenter or ESXi address.

DEBU[0000] Got response:

INFO[0000] Using administrator@vsphere.local as username.

Password:

DEBU[0005] Got response: {"session_id": "eyJraWQiOiJENkM0QUVDMDEwRjhFNEM4MkNCMURBODlCRjcxN0ZFOUIyQUFBM0NBIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJBZG1pbmlzdHJhdG9yQHZzcGhlcmUubG9jYWwiLCJhdWQiOiJ2b

hcmUtdGVzOnZjOnZuczprOHMiLCJkb21haW4iOiJ2c3BoZXJlLmxvY2FsIiwiaXNzIjoiaHR0cHM6XC9cL2h4dmM3LnNobGFiLmxvY2FsXC9vcGVuaWRjb25uZWN0XC92c3BoZXJlLmxvY2FsIiwiZ3JvdXBfbmFtZXMiOlsiTGljZW5zZVNlcnZpY2UuQWRtaW5pc

yYXRvcnNAdnNwaGVyZS5sb2NhbCIsIkFkbWluaXN0cmF0b3JzQHZzcGhlcmUubG9jYWwiLCJFdmVyeW9uZUB2c3BoZXJlLmxvY2FsIiwiQ0FBZG1pbnNAdnNwaGVyZS5sb2NhbCIsIlN5c3RlbUNvbmZpZ3VyYXRpb24uQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb

hbCIsIlN5c3RlbUNvbmZpZ3VyYXRpb24uQmFzaFNoZWxsQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2NhbCIsIlVzZXJzQHZzcGhlcmUubG9jYWwiXSwiZXhwIjoxNTkyOTEwMTk5LCJpYXQiOjE1OTI4NzQxOTksImp0aSI6ImY2NWM3YzMwLWUyMTAtNDM1OC1hZ

wLTkyNjU4Y2JjMWRhMiIsInVzZXJuYW1lIjoiQWRtaW5pc3RyYXRvciJ9.der_zo_IPAJ00gnifRA4vldQVOK63UzuDU8xJxyyexgW8nrdpy_MK27yrlHFT7aG9vAk2ZeovLozFfVzIzHbHoWVa1a6rPTop4gSgGn_PBNfchznbtgQ9IALi2HMqRrZxXEBIL4YHHAW

LJ1N0KFwVa8dNo8VqPZ2Fmw-FAYlT871VznWq6YMkbrVAYUnLtvziAJYp0Pd5wUmbNv7eGapOdpR522Ig-eHSt9JXxLd_THkPjdqDeQYLLuIkr6d_Ba-fE9Q8Dqk8QYxwnoWs2ObAGkyN24OxH824ZATOLsyr4Ddg89UkvIK3lSYcybDQQVq6RPsMtc4IqlxiqYEdG

"}

DEBU[0005] Found kubectl in $PATH

INFO[0005] kubectl version:

INFO[0005] Client Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.4-2+a00aae1e6a4a69", GitCommit:"a00aae1e6a4a698595445ec86aab1502a495c1ce", GitTreeState:"clean", BuildDate:"2020-04-22T11:35:29Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"windows/amd64"}

DEBU[0005] Calling `kubectl config set-cluster 192.168.30.5 --server=https://192.168.30.5:6443 --insecure-skip-tls-verify`

DEBU[0005] stdout: Cluster "192.168.30.5" set.

DEBU[0005] stderr:

DEBU[0005] Calling kubectl.

DEBU[0005] stdout: User "wcp:192.168.30.5:administrator@vsphere.local" set.

DEBU[0005] stderr:

DEBU[0006] Got response: [{"namespace": "yelb", "master_host": "192.168.60.33"}]

DEBU[0006] Calling kubectl.

DEBU[0006] Calling kubectl.

DEBU[0006] Calling kubectl.

DEBU[0006] Creating wcp.Client for 192.168.60.33.

WARN[0126] Error occurred during HTTP request: Post https://192.168.60.33/wcp/login: read tcp 192.168.20.200:58700->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host.

ERRO[0126] Login failed: Post https://192.168.60.33/wcp/login: read tcp 192.168.20.200:58700->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host.

Logged in successfully.

DEBU[0126] Calling `kubectl config set-context 192.168.30.5 --cluster=192.168.30.5 --user=wcp:192.168.30.5:administrator@vsphere.local`

DEBU[0126] stdout: Context "192.168.30.5" created.

DEBU[0126] stderr:

DEBU[0126] Calling kubectl.

You have access to the following contexts:

   192.168.30.5

If the context you wish to use is not in this list, you may need to try logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

daphnissov
Immortal

You don't specify the protocol along with the server. Try like this and show the output. Manually input your username (administrator@vsphere.local) and password when prompted.

kubectl vsphere login --server=192.168.30.5 --insecure-skip-tls-verify -v 10

daphnissov
Immortal

I also had another look at your output. It looks like you have a namespace called "yelb" that is not responding, but it created the context to the supervisor cluster fine. You should be able to use the supervisor context successfully:

kubectl config use-context 192.168.30.5

kubectl get nodes

chen_fred
Contributor

The output is as follows. It seems the client can connect to the management IP but cannot log in to the load balancer IP, and cannot operate on the created namespace.

C:\Users\Administrator>kubectl vsphere login --server=192.168.30.5 --insecure-skip-tls-verify -v 10

DEBU[0000] User passed verbosity level: 10

DEBU[0000] Setting verbosity level: 10

DEBU[0000] Setting request timeout:

DEBU[0000] login called as: C:\Windows\system32\kubectl-vsphere.exe login --server=192.168.30.5 --insecure-skip-tls-verify -v 10

DEBU[0000] Creating wcp.Client for 192.168.30.5.

INFO[0000] Does not appear to be a vCenter or ESXi address.

DEBU[0000] Got response:

Username: administrator@vsphere.local

INFO[0011] Using administrator@vsphere.local as username.

Password:

DEBU[0015] Got response: {"session_id": "eyJraWQiOiJENkM0QUVDMDEwRjhFNEM4MkNCMURBODlCRjcxN0ZFOUIyQUFBM0NBIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJBZG1pbmlzdHJhdG9yQHZzcGhlcmUubG9jYWwiLCJhdWQiOiJ2bXd

hcmUtdGVzOnZjOnZuczprOHMiLCJkb21haW4iOiJ2c3BoZXJlLmxvY2FsIiwiaXNzIjoiaHR0cHM6XC9cL2h4dmM3LnNobGFiLmxvY2FsXC9vcGVuaWRjb25uZWN0XC92c3BoZXJlLmxvY2FsIiwiZ3JvdXBfbmFtZXMiOlsiTGljZW5zZVNlcnZpY2UuQWRtaW5pc3R

yYXRvcnNAdnNwaGVyZS5sb2NhbCIsIkFkbWluaXN0cmF0b3JzQHZzcGhlcmUubG9jYWwiLCJFdmVyeW9uZUB2c3BoZXJlLmxvY2FsIiwiQ0FBZG1pbnNAdnNwaGVyZS5sb2NhbCIsIlN5c3RlbUNvbmZpZ3VyYXRpb24uQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2N

hbCIsIlN5c3RlbUNvbmZpZ3VyYXRpb24uQmFzaFNoZWxsQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2NhbCIsIlVzZXJzQHZzcGhlcmUubG9jYWwiXSwiZXhwIjoxNTkyOTU4NDQ3LCJpYXQiOjE1OTI5MjI0NDcsImp0aSI6IjM5NzZiYjQ4LTg2OGYtNDFiMC1iMmY

4LTdjYTRiODljMjM3NSIsInVzZXJuYW1lIjoiQWRtaW5pc3RyYXRvciJ9.Y3LSYvgugf33GZSyRcpO0KAVjH4QEA745XEknqjYoW2GxNXhJIruhKtP7OIiqAmfJs6fpPk7kTOuc53HVDPW5XEIP3TZqV2Z84nrH0P0vTHpbD4AcsrZPt3gsplSAZpRa5GshouzsFHPtG

E1R6rOqigaB3yiTxJBH-QK3y9NAqwZdDSLjtnTJjlUGUfsIpIWfTRdZXGGr4EtXtWMwgVyMfWtNOfClfy7WJxllfNfcYHMG9O_wxsPkR029gaDt_PRlxBP1hZUWRb6DTDZJPPsxNvCRFCUiF6ijWKuSInN05xlpBfyV_jRKBE1f88QhCqX9SNAXvP6EVyV5495FHQ8aQ

"}

DEBU[0015] Found kubectl in $PATH

INFO[0015] kubectl version:

INFO[0015] Client Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.4-2+a00aae1e6a4a69", GitCommit:"a00aae1e6a4a698595445ec86aab1502a495c1ce", GitTreeState:"clean", BuildDate:"2020-04-22T11:35:29Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"windows/amd64"}

DEBU[0015] Calling `kubectl config set-cluster 192.168.30.5 --server=https://192.168.30.5:6443 --insecure-skip-tls-verify`

DEBU[0015] stdout: Cluster "192.168.30.5" set.

DEBU[0015] stderr:

DEBU[0015] Calling kubectl.

DEBU[0015] stdout: User "wcp:192.168.30.5:administrator@vsphere.local" set.

DEBU[0015] stderr:

DEBU[0016] Got response: [{"namespace": "yelb", "master_host": "192.168.60.33"}]

DEBU[0016] Calling kubectl.

DEBU[0016] Calling kubectl.

DEBU[0016] Calling kubectl.

DEBU[0016] Creating wcp.Client for 192.168.60.33.

WARN[0166] Error occurred during HTTP request: Post https://192.168.60.33/wcp/login: read tcp 192.168.20.200:56441->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host.

ERRO[0166] Login failed: Post https://192.168.60.33/wcp/login: read tcp 192.168.20.200:56441->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host.

Logged in successfully.

DEBU[0166] Calling `kubectl config set-context 192.168.30.5 --cluster=192.168.30.5 --user=wcp:192.168.30.5:administrator@vsphere.local`

DEBU[0166] stdout: Context "192.168.30.5" created.

DEBU[0166] stderr:

DEBU[0166] Calling kubectl.

You have access to the following contexts:

   192.168.30.5

If the context you wish to use is not in this list, you may need to try

logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

C:\Users\Administrator>kubectl config use-context 192.168.30.5

Switched to context "192.168.30.5".

C:\Users\Administrator>kubectl get nodes

NAME                               STATUS   ROLES    AGE   VERSION

422858cc0de3dc698c4b423a89c885cb   Ready    master   30h   v1.17.4-2+a00aae1e6a4a69

42286a45de5f6a2951887c7e7e49f3a3   Ready    master   30h   v1.17.4-2+a00aae1e6a4a69

4228ddcd096d5617fac0cceb71d0877e   Ready    master   30h   v1.17.4-2+a00aae1e6a4a69

hx01                               Ready    agent    30h   v1.17.4-sph-091e39b

hx03                               Ready    agent    30h   v1.17.4-sph-091e39b

daphnissov
Immortal

192.168.30.5 should be the VIP fronting the supervisor control plane; I don't know what that other IP is. But logging in to the supervisor clearly works; it's just that the other namespace does not. Delete the namespace and redeploy.

chen_fred
Contributor

After I deploy a new namespace, it is the same: I cannot log in.

C:\Users\Administrator>kubectl vsphere login --server=192.168.30.5 --insecure-skip-tls-verify -v 10

DEBU[0000] User passed verbosity level: 10

DEBU[0000] Setting verbosity level: 10

DEBU[0000] Setting request timeout:

DEBU[0000] login called as: C:\Windows\system32\kubectl-vsphere.exe login --server=192.168.30.5 --insecure-skip-tls-verify -v 10

DEBU[0000] Creating wcp.Client for 192.168.30.5.

INFO[0000] Does not appear to be a vCenter or ESXi address.

DEBU[0000] Got response:

Username: administrator@vsphere.local

INFO[0006] Using administrator@vsphere.local as username.

Password:

DEBU[0010] Got response: {"session_id": "eyJraWQiOiJENkM0QUVDMDEwRjhFNEM4MkNCMURBODlCRjcxN0ZFOUIyQUFBM0NBIiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJBZG1pbmlzdHJhdG9yQHZzcGhlcmUubG9jYWwiLCJhdWQiOiJ2bXd

hcmUtdGVzOnZjOnZuczprOHMiLCJkb21haW4iOiJ2c3BoZXJlLmxvY2FsIiwiaXNzIjoiaHR0cHM6XC9cL2h4dmM3LnNobGFiLmxvY2FsXC9vcGVuaWRjb25uZWN0XC92c3BoZXJlLmxvY2FsIiwiZ3JvdXBfbmFtZXMiOlsiTGljZW5zZVNlcnZpY2UuQWRtaW5pc3R

yYXRvcnNAdnNwaGVyZS5sb2NhbCIsIkFkbWluaXN0cmF0b3JzQHZzcGhlcmUubG9jYWwiLCJFdmVyeW9uZUB2c3BoZXJlLmxvY2FsIiwiQ0FBZG1pbnNAdnNwaGVyZS5sb2NhbCIsIlN5c3RlbUNvbmZpZ3VyYXRpb24uQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2N

hbCIsIlN5c3RlbUNvbmZpZ3VyYXRpb24uQmFzaFNoZWxsQWRtaW5pc3RyYXRvcnNAdnNwaGVyZS5sb2NhbCIsIlVzZXJzQHZzcGhlcmUubG9jYWwiXSwiZXhwIjoxNTkyOTU5OTc4LCJpYXQiOjE1OTI5MjM5NzgsImp0aSI6ImEyMjIzNWEzLTI4YjktNDhjMC1hMjE

0LWQ0ZWVmZDM3ZDQ5ZiIsInVzZXJuYW1lIjoiQWRtaW5pc3RyYXRvciJ9.XIM0GIq4Ot8EC-vdZ4CUVmxIpCYRlFSfEpDvYTAeDp-9xU72p8aFxZwsd0zH2sgpSFqeE-mGQEWvCcXsM62u7BNwNlSjeggNGuPq4Kh8YM1LTx-4NGM9E4n65NzrO4DHiXoiwPcumBo_Kz

WZqwTaZnI4VVzPETehnFzPWC4LxIfcvalDFrHdG7eNsm3HuoZd46IqvBYPypaaWJ07q5fa1tmXMUdHy0-n04mvj9YBTEJoGrmYajjOSnmA0iRD8XWHwgnSZi3Jkq_eORkKDaqEaMcgO7swvcfnCtchWlVDQPF7XwmhTEt6dNLmzGThe_sDTU2AAWfLKCetWJpohbOoMw

"}

DEBU[0010] Found kubectl in $PATH

INFO[0010] kubectl version:

INFO[0010] Client Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.4-2+a00aae1e6a4a69", GitCommit:"a00aae1e6a4a698595445ec86aab1502a495c1ce", GitTreeState:"clean", BuildDate:"2020-04-22T11:35:29Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"windows/amd64"}

DEBU[0010] Calling `kubectl config set-cluster 192.168.30.5 --server=https://192.168.30.5:6443 --insecure-skip-tls-verify`

DEBU[0010] stdout: Cluster "192.168.30.5" set.

DEBU[0010] stderr:

DEBU[0010] Calling kubectl.

DEBU[0010] stdout: User "wcp:192.168.30.5:administrator@vsphere.local" set.

DEBU[0010] stderr:

DEBU[0010] Got response: [{"namespace": "fred-namespace", "master_host": "192.168.60.33"}]

DEBU[0010] Calling kubectl.

DEBU[0010] Calling kubectl.

DEBU[0011] Calling kubectl.

DEBU[0011] Creating wcp.Client for 192.168.60.33.

WARN[0161] Error occurred during HTTP request: Post https://192.168.60.33/wcp/login: read tcp 192.168.20.200:57086->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host.

ERRO[0161] Login failed: Post https://192.168.60.33/wcp/login: read tcp 192.168.20.200:57086->192.168.60.33:443: wsarecv: An existing connection was forcibly closed by the remote host.

Logged in successfully.

DEBU[0161] Calling `kubectl config set-context 192.168.30.5 --cluster=192.168.30.5 --user=wcp:192.168.30.5:administrator@vsphere.local`

DEBU[0162] stdout: Context "192.168.30.5" modified.

DEBU[0162] stderr:

DEBU[0162] Calling kubectl.

You have access to the following contexts:

   192.168.30.5

If the context you wish to use is not in this list, you may need to try

logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

C:\Users\Administrator>kubectl config use-context fred-namespace

error: no context exists with the name: "fred-namespace"

C:\Users\Administrator>

daphnissov
Immortal

I don't understand where this 60.33 address is coming from. What is this address?

chen_fred
Contributor

I have found the issue and fixed it.

The issue was the network MTU. The MTU of the VLAN interfaces between the ESXi TEP VLAN and the Edge TEP VLAN must also be at least 1600.

If the MTU is 1500, the Geneve packets get fragmented, so the login traffic to the API server VIP never makes it along the path from ESXi to Edge to the load balancer and out to external clients.

So before enabling Workload Management, it is necessary to test the network with vmkping -I vmk10 <edge TEP IP> -S vxlan -s 1572 -d. If that succeeds, you should not hit this problem.
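For reference, the arithmetic behind the 1600-byte guidance and the 1572-byte vmkping payload can be sketched as follows (a back-of-envelope sketch assuming IPv4 outer headers and no Geneve option TLVs; the byte counts are standard header sizes, not values from this thread):

```shell
#!/bin/sh
# Geneve encapsulation overhead per packet (IPv4 outer, no options):
# inner Ethernet 14 + Geneve 8 + outer UDP 8 + outer IPv4 20 = 50 bytes.
OVERHEAD=$((14 + 8 + 8 + 20))
echo "encap overhead: ${OVERHEAD} bytes"                              # 50
echo "outer size for a 1500-byte inner packet: $((1500 + OVERHEAD))"  # 1550
# The >= 1600 MTU guidance leaves headroom for Geneve option TLVs.
# The vmkping payload of 1572 lines up with that limit:
# 1572 ICMP payload + 8 ICMP header + 20 IP header = 1600 bytes, sent
# with -d (don't fragment) so it fails fast if any hop's MTU is lower:
#   vmkping -I vmk10 -S vxlan -s 1572 -d <edge TEP IP>
echo "vmkping packet size: $((1572 + 8 + 20))"                        # 1600
```

So a transport network that only passes 1500-byte frames silently breaks encapsulated traffic even though plain pings between TEPs still succeed.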
