VMware Beta Community
ccalvetbeta
Enthusiast

Resizing worker pool not working?

I managed to create a new cluster; it is now in state Ready.
It was initially provisioned with 3 control plane nodes and 1 worker node.
I am trying to increase to two worker nodes.
In the resize wizard I select 2 for "Number of Nodes" and click submit.
I end up with the message "Acknowledged node pool resize request".

But after that, nothing: no new events or tasks.
The CSE journal doesn't seem to contain anything relevant to this request, only a "status check" of the cluster every minute.


Is it a known issue or is it supposed to work?



3 Replies
sakthi2019
VMware Employee

Good to know that you have created a cluster.
The resize issue is a known issue in the beta. CSE polls the RDE at a regular interval to pick up any changes.
You can check whether the RDE got updated with a GET against https://{{base_url}}/cloudapi/1.0.0/entities/types/vmware/capvcdCluster/1.1.0 : entity->spec->capiYaml should contain the updated replica count for the worker nodes.
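Once you have fetched the RDE (with your usual bearer token), a quick way to confirm the change is to scan the capiYaml string for the replica counts. This is only an illustrative sketch: the helper name and the YAML fragment below are made up for the example, and a real check would parse the YAML properly and look at the MachineDeployment spec rather than regex-matching.

```python
import re

def find_replicas(capi_yaml: str) -> list[int]:
    """Return every 'replicas: N' value found in a CAPI YAML string.

    Lightweight regex scan; a robust version would parse the YAML and
    inspect the MachineDeployment objects specifically.
    """
    return [int(n) for n in re.findall(r"^\s*replicas:\s*(\d+)",
                                       capi_yaml, re.MULTILINE)]

# Illustrative fragment resembling entity->spec->capiYaml:
snippet = """
kind: KubeadmControlPlane
spec:
  replicas: 3
---
kind: MachineDeployment
spec:
  replicas: 2
"""

print(find_replicas(snippet))
```

If the second value already shows your requested worker count, the RDE was updated and the delay is on the CSE/reconciliation side.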

lzichong
VMware Employee

Following up on what sakthi2019 said: if you notice the replica count has changed after checking the RDE in entity->spec->capiYaml (you can Ctrl/Cmd-F and search for "replicas"), then it is possible that the pod responsible for applying updates (rdeprojector) has stopped reconciling. This is a known issue with the RDEProjector and has been fixed for GA. To resolve it, you may need to delete the rdeprojector pod and let it restart. To do this, you will need to access the cluster with kubectl.

1. Download the Kubernetes config associated to your cluster from the UI.

2. After downloading the Kubernetes config, note its path, as it needs to be specified when running kubectl commands.

3. Get the list of running pods with 'kubectl get pods -A --kubeconfig=/path/of/kubernetes-config.txt'.

4. Look for a pod with a name starting with 'rdeprojector-' and delete it with 'kubectl delete pod -n rdeprojector-system rdeprojectorPodName --kubeconfig=/path/of/kubernetes-config.txt'. This forces a restart of the rdeprojector, as a new pod is automatically brought up in its place, and after some time the pending updates should be applied.
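Steps 3 and 4 can be sketched as a short shell session. The namespace and pod-name prefix follow the reply above; substitute the actual pod name from the listing and your own kubeconfig path.

```shell
KUBECONFIG_PATH=/path/of/kubernetes-config.txt

# Step 3: list all pods and locate the rdeprojector pod
kubectl get pods -A --kubeconfig="$KUBECONFIG_PATH" | grep rdeprojector

# Step 4: delete it; the controller automatically brings up a replacement
kubectl delete pod -n rdeprojector-system <rdeprojector-pod-name> \
  --kubeconfig="$KUBECONFIG_PATH"

# Verify the new pod comes back up before expecting the resize to apply
kubectl get pods -n rdeprojector-system --kubeconfig="$KUBECONFIG_PATH"
```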

ccalvetbeta
Enthusiast

In the end it worked without any action on my side.
So it is just slow to start.
