VMware Cloud Community
abdurrahman-12
Contributor

vSphere with Tanzu TKC Creation Error

Hi everyone

I am deploying a TKC cluster using the v1alpha2 API on vSphere 7 U3.

The control plane and worker VMs are deployed, but the cluster is stuck in the Creating phase.

When I describe the cluster using:

kubectl describe tkc tkgs-prod-cluster -n ns-prod

I get the following (truncated):
......
Conditions:
Last Transition Time: 2023-02-16T05:54:40Z
Message: node pools [] are unknown, [] are failed, [worker-nodepool-a1] are updating
Reason: NodePoolsUpdating
Severity: Info
Status: False
Type: Ready
..............

I created the cluster with 1 control plane node and 3 worker nodes, but when I run kubectl get nodes inside the cluster it lists the following:

tkgs-prod-cluster-control-plane-9w8zz Ready control-plane,master 32m v1.23.8+vmware.3
tkgs-prod-cluster-worker-nodepool-a1-7bq9v-65b6485f56-9k72d Ready <none> 28m v1.23.8+vmware.3
tkgs-prod-cluster-worker-nodepool-a1-7bq9v-65b6485f56-bq89s Ready <none> 28m v1.23.8+vmware.3

There are only 2 worker nodes, but in the vSphere inventory I see 3 worker VMs.

When I run kubectl get po -A, all the pods are running and healthy.

 

Does anyone know what the issue could be?

1 Reply
codedoings
Contributor

Late reply, but you can get more information about the provisioning of the virtual machines for the Kubernetes cluster by running kubectl get machines -n <your vsphere namespace> while in the Supervisor cluster context.
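Something along these lines should work; the Supervisor address, SSO user, and machine names below are just placeholders for your own environment:

# log in to the Supervisor cluster and switch to its context
kubectl vsphere login --server=<supervisor-ip> --vsphere-username <your-sso-user>
kubectl config use-context <supervisor-ip>

# list the Cluster API machine objects that back the TKC nodes
kubectl get machines -n ns-prod

# if one machine is stuck in Provisioning, its conditions and events usually show why
kubectl describe machine <machine-name> -n ns-prod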

 

There is a pretty nice troubleshooting guide available here: https://core.vmware.com/blog/tanzu-kubernetes-grid-service-troubleshooting-deep-dive-part-3
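If the machine objects all look healthy, you can also check the VirtualMachine resources on the Supervisor side to see the state of the third worker VM (again, the resource name is a placeholder):

kubectl get virtualmachines -n ns-prod
kubectl describe virtualmachine <vm-name> -n ns-prod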
