VMware Cloud Community
vapetri
Contributor

TKC creation error in vSphere Workload Management

Hi,

I have a lab environment with Workload Management enabled.

When I try to create my first TKC, the control plane virtual machines (best-effort-xsmall) are not deployed, and the workers are not either, since they wait for the control plane to start.

The error is shown below:

Message: Namespace does not have access to VirtualMachineImage. imageName: photon-3-k8s-v1.21.6---vmware.1-tkg.1.b3d708a, contentLibraryUUID: 4377ec3f-47ef-4ede-c12b-090dfsa7e, namespace: lab1test Reason: ContentSourceBindingNotFound Severity: Error Status: False Type: VirtualMachinePrereqReady.

The content library is shown in the graphical interface, and kubectl get tkr -n lab1test shows the images.

I haven't found anybody with this kind of error. Can somebody tell me what I am doing wrong?
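For reference, the ContentSourceBindingNotFound reason refers to the VM Operator objects that link a Supervisor namespace to a content library. Assuming the standard vSphere with Tanzu v1alpha1 CRDs, they can be inspected like this (the commands need a kubectl session against the Supervisor cluster; the namespace name is mine):

```shell
# Content libraries registered with VM Operator (cluster-scoped)
kubectl get contentsources

# Bindings that grant a namespace access to a content source;
# the error message suggests this list is empty for lab1test
kubectl get contentsourcebindings -n lab1test

# VM images VM Operator has synced from the library
kubectl get virtualmachineimages
```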

Thank you!

----------------

I've added more details:

The VM images are visible:

kubectl get tkr
NAME                                VERSION                          READY   COMPATIBLE   CREATED   UPDATES AVAILABLE
v1.20.12---vmware.1-tkg.1.b9a42f3   1.20.12+vmware.1-tkg.1.b9a42f3   True    True         22h       [1.21.6+vmware.1-tkg.1.b3d708a]
v1.20.8---vmware.1-tkg.2            1.20.8+vmware.1-tkg.2            True    True         22h
v1.21.6---vmware.1-tkg.1.b3d708a    1.21.6+vmware.1-tkg.1.b3d708a    True    True         20h

 

Namespace details:

kubectl describe namespace lab1test:

Name:         lab1test
Labels:       kubernetes.io/metadata.name=lab1test
              vSphereClusterID=domain-c6134
Annotations:  vmware-system-resource-pool: resgroup-11037
              vmware-system-resource-pool-cpu-limit:
              vmware-system-resource-pool-memory-limit: 16384Mi
              vmware-system-vm-folder: group-v11040
Status:       Active

Resource Quotas
  Name:             lab1test
  Resource          Used  Hard
  --------          ---   ---
  requests.storage  0     200Gi

  Name:     lab1test-storagequota
  Resource                                                                 Used  Hard
  --------                                                                 ---   ---
  tanzu-grid-storage-policy.storageclass.storage.k8s.io/requests.storage   0     9223372036833775807

No LimitRange resource.

 

The cluster YAML file is basic:

apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: lab1test-cluster
  namespace: lab1test
spec:
  distribution:
    version: 1.21.6+vmware.1-tkg.1.b3d708a
  topology:
    controlPlane:
      count: 1
      class: best-effort-xsmall
      storageClass: tanzu-grid-storage-policy
    workers:
      count: 2
      class: best-effort-xsmall
      storageClass: tanzu-grid-storage-policy

If I look at the TKC status:

Name: lab1test-cluster
Namespace: lab1test
Labels: run.tanzu.vmware.com/tkr=v1.21.6---vmware.1-tkg.1.b3d708a
Annotations: <none>
API Version: run.tanzu.vmware.com/v1alpha2
Kind: TanzuKubernetesCluster
Metadata:
Creation Timestamp: 2022-02-11T09:21:09Z
Finalizers:
tanzukubernetescluster.run.tanzu.vmware.com
Generation: 1
Managed Fields:
API Version: run.tanzu.vmware.com/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:distribution:
.:
f:version:
f:topology:
.:
f:controlPlane:
.:
f:class:
f:count:
f:storageClass:
f:workers:
.:
f:class:
f:count:
f:storageClass:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-02-11T09:21:09Z
API Version: run.tanzu.vmware.com/v1alpha2
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
.:
v:"tanzukubernetescluster.run.tanzu.vmware.com":
f:labels:
.:
f:run.tanzu.vmware.com/tkr:
f:status:
f:apiEndpoints:
f:conditions:
f:phase:
f:totalWorkerReplicas:
Manager: manager
Operation: Update
Time: 2022-02-11T09:21:24Z
Resource Version: 498980
Self Link: /apis/run.tanzu.vmware.com/v1alpha2/namespaces/lab1test/tanzukubernetesclusters/lab1test-cluster
UID: 5b140fe8-4cb6-4012-ac05-9be098fcd609
Spec:
Distribution:
Full Version: v1.21.6+vmware.1-tkg.1.b3d708a
Version: 1.21.6+vmware.1-tkg.1.b3d708a
Settings:
Network:
Cni:
Name: antrea
Pods:
Cidr Blocks:
192.168.0.0/16
Service Domain: cluster.local
Services:
Cidr Blocks:
10.96.0.0/12
Topology:
Control Plane:
Replicas: 1
Storage Class: tanzu-grid-storage-policy
Tkr:
Reference:
Name: v1.21.6---vmware.1-tkg.1.b3d708a
Vm Class: best-effort-xsmall
Node Pools:
Name: workers
Replicas: 2
Storage Class: tanzu-grid-storage-policy
Tkr:
Reference:
Name: v1.21.6---vmware.1-tkg.1.b3d708a
Vm Class: best-effort-xsmall
Status:
API Endpoints:
Host: 172.22.28.101
Port: 6443
Conditions:
Last Transition Time: 2022-02-11T09:21:33Z
Message: 1 of 2 completed
Reason: ContentSourceBindingNotFound @ Machine/lab1test-cluster-control-plane-jsk8j
Severity: Error
Status: False
Type: Ready
Last Transition Time: 2022-02-11T09:21:33Z
Message: 1 of 2 completed
Reason: ContentSourceBindingNotFound @ Machine/lab1test-cluster-control-plane-jsk8j
Severity: Error
Status: False
Type: ControlPlaneReady
Last Transition Time: 2022-02-11T09:21:24Z
Message: node pools [] are unknown, [] are failed, [workers] are updating
Reason: NodePoolsUpdating
Severity: Info
Status: False
Type: NodePoolsReady
Last Transition Time: 2022-02-11T09:21:24Z
Message: 0/1 Control Plane Node(s) healthy. 0/2 Worker Node(s) healthy
Reason: WaitingForNodesHealthy
Severity: Info
Status: False
Type: NodesHealthy
Last Transition Time: 2022-02-10T12:33:35Z
Status: True
Type: TanzuKubernetesReleaseCompatible
Last Transition Time: 2022-02-10T12:33:36Z
Reason: NoUpdates
Status: False
Type: UpdatesAvailable
Phase: failed
Total Worker Replicas: 2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal PhaseChanged 2m5s vmware-system-tkg/vmware-system-tkg-controller-manager/tanzukubernetescluster-status-controller cluster changes from creating phase to failed phase

 

If I look at the control plane VM in the TKC, I see the error:

Name: lab1test-cluster-control-plane-jsk5j
Namespace: lab1test
Labels: capw.vmware.com/cluster.name=lab1test-cluster
capw.vmware.com/cluster.role=controlplane
Annotations: vsphere-cluster-module-group: control-plane-group
vsphere-tag: CtrlVmVmAATag
API Version: vmoperator.vmware.com/v1alpha1
Kind: VirtualMachine
Metadata:
Creation Timestamp: 2022-02-11T09:21:31Z
Finalizers:
virtualmachine.vmoperator.vmware.com
Generation: 1
Managed Fields:
API Version: vmoperator.vmware.com/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:vsphere-cluster-module-group:
f:vsphere-tag:
f:finalizers:
.:
v:"virtualmachine.vmoperator.vmware.com":
f:labels:
.:
f:capw.vmware.com/cluster.name:
f:capw.vmware.com/cluster.role:
f:ownerReferences:
.:
k:{"uid":"5978508a-cc10-49f1-8fda-537129f087a5"}:
.:
f:apiVersion:
f:blockOwnerDeletion:
f:controller:
f:kind:
f:name:
f:uid:
f:spec:
.:
f:className:
f:imageName:
f:networkInterfaces:
f:powerState:
f:resourcePolicyName:
f:storageClass:
f:vmMetadata:
.:
f:configMapName:
f:transport:
f:status:
.:
f:conditions:
Manager: manager
Operation: Update
Time: 2022-02-11T09:21:31Z
Owner References:
API Version: infrastructure.cluster.vmware.com/v1alpha3
Block Owner Deletion: true
Controller: true
Kind: WCPMachine
Name: lab1test-cluster-control-plane-2skvq-cn67q
UID: 5978508a-cc10-49f1-8fda-537129f087a5
Resource Version: 498947
Self Link: /apis/vmoperator.vmware.com/v1alpha1/namespaces/lab1test/virtualmachines/lab1test-cluster-control-plane-jsk5j
UID: 0584a0bc-84b2-4dea-8c46-390d133f984c
Spec:
Class Name: best-effort-xsmall
Image Name: photon-3-k8s-v1.21.6---vmware.1-tkg.1.b3d708a
Network Interfaces:
Network Name: tanzu127workload
Network Type: vsphere-distributed
Power State: poweredOn
Resource Policy Name: lab1test-cluster
Storage Class: tanzu-grid-storage-policy
Vm Metadata:
Config Map Name: lab1test-cluster-control-plane-2skvq-cn17q-cloud-init
Transport: ExtraConfig
Status:
Conditions:
Last Transition Time: 2022-02-11T09:21:31Z
Message: Namespace does not have access to VirtualMachineImage. imageName: photon-3-k8s-v1.21.6---vmware.1-tkg.1.b3d708a, contentLibraryUUID: 4377ec3f-47ef-4ede-c12b-090dfsa7e, namespace: lab1test
Reason: ContentSourceBindingNotFound
Severity: Error
Status: False
Type: VirtualMachinePrereqReady
Events: <none>

Any ideas?

Thank you

 

1 Solution

Accepted Solutions
Sysadmind
Contributor

Hi, I had the same problem, and it is already solved.

You need to remove the security policy associated with the content library and redeploy the TKG cluster.
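For anyone verifying the fix: once the security policy is removed from the library in the vSphere Client, the namespace should regain access to the images before you redeploy. A quick check from the Supervisor cluster (assuming the same v1alpha1 VM Operator CRDs as in the error message; the namespace and image names are from this thread):

```shell
# A binding should now exist for the namespace
kubectl get contentsourcebindings -n lab1test

# And the Photon image from the error should be listed again
kubectl get virtualmachineimages | grep photon-3-k8s-v1.21.6
```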

BR

 


2 Replies

vapetri
Contributor

Hi,

Thanks for the reply.

In the meantime, I also solved it by recreating the TKG cluster.

Thank you for your help.

BR.
