
    VIO 6 Upgrade

    jet1981 Novice

      I am upgrading from VIO 5.1.0.2 and completed step 10 of the migration procedure. However, after running the commands, the deployment never started...

      10. Apply the upgrade configuration file to the new VMware Integrated OpenStack deployment.

      kubectl -n openstack apply -f restore.yaml

      This was the output of the two kubectl commands.

      root@photon-machine [ ~ ]# kubectl -n openstack create -f cluster.yaml

      secret/managedpasswords created

      viocluster.vio.vmware.com/cluster1 created

      viomachineset.vio.vmware.com/manager1 created

      viomachineset.vio.vmware.com/controller1 created

      vcenter.vio.vmware.com/vcenter1 created

      nsx.vio.vmware.com/nsx1 created

      viosecret.vio.vmware.com/viosecret1 created

      root@photon-machine [ ~ ]# kubectl -n openstack apply -f restore.yaml

      restorationrequest.vio.vmware.com/restorationrequest1 created

      I don't see any of the pods in an error state.

       

      Is there a step missing from the documentation to kick off the deployment? Am I missing something?

       

      Thank you

        • 1. Re: VIO 6 Upgrade
          xiangfeiz Enthusiast
          VMware Employees

           The last command triggers the restoration process, and the deployment should start in the background.

           

           Can you check 'viocli get deployment'? If it does not show the system in the Running state, please use 'kubectl -n openstack get pods' to check the pods' status. Start with the pods whose names begin with 'restore-' to see whether the restoration succeeded.
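
           For example (the grep filter is just a convenience; the actual restore pod names will carry generated suffixes):

           viocli get deployment

           kubectl -n openstack get pods | grep '^restore-'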

          • 2. Re: VIO 6 Upgrade
            jet1981 Novice

             Thanks for the response. 'viocli get deployment' did not show any running state, and there were no pods with names starting with 'restore-'. I went ahead and deleted the VIO 6 appliance, redeployed it, and tried the migration again. This time the deployment kicked off and the 3 controllers were built. However, no OpenStack services seemed to start: 'viocli get deployment' yielded `no objects of kind OSDeployment found in the namespace openstack`, and 'kubectl -n openstack get pods' showed all containers either running or completed. Again, there were no pods with names similar to 'restore-'.

             

            root@photon-machine [ ~/.ssh ]# kubectl -n openstack get pods

            NAME                                                              READY   STATUS      RESTARTS   AGE

            cluster-controller-bbcdf45df-vmdrh                                1/1     Running     0          102m

            create-viocluster-cluster1-13e8073e-0a88-11ea-aaae-005056b89bx9   0/1     Completed   0          49m

            fluentbit-2wv64                                                   1/1     Running     0          100m

            fluentbit-7km9r                                                   1/1     Running     0          45m

            fluentbit-gdfrw                                                   1/1     Running     0          46m

            fluentbit-hcf2h                                                   1/1     Running     0          48m

            fluentd-htsvk                                                     1/1     Running     0          45m

            fluentd-lh5kh                                                     1/1     Running     0          46m

            fluentd-x6nc5                                                     1/1     Running     0          48m

            fluentd-z99nl                                                     1/1     Running     0          100m

            helm-fluent-logging-fluent-logging-l6l4zmd66m-8d8c4               0/1     Completed   0          100m

            license-controller-7d86788f99-bmc5f                               1/1     Running     0          102m

            openstack-controller-7b4c99f5c9-q9vgm                             1/1     Running     0          102m

            patching-controller-95576f764-2cv4r                               1/1     Running     0          103m

            rnb-controller-77df6d7dbc-2vh9j                                   2/2     Running     0          102m

            status-controller-7c7cfdfdfb-5mt6z                                2/2     Running     0          102m

            valid-viocluster-cluster1-13e8073e-0a88-11ea-aaae-005056b22mbjj   0/1     Completed   0          49m

            valid-viomachineset-controller1-pcwjr8scdg-jslnv                  0/1     Completed   0          50m

            valid-viomachineset-manager1-ch2b8tx5jb-w7mpk                     0/1     Completed   0          50m

             Each pod's logs mention 'unable to sync key' errors of various types. I verified that the upgrade.tar.gz was in the VIO content library.
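
             (For reference, those messages can be pulled from a single pod with something like the following, using the openstack-controller pod name from the listing above:)

             kubectl -n openstack logs openstack-controller-7b4c99f5c9-q9vgm | grep 'unable to sync key'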

            • 3. Re: VIO 6 Upgrade
              xiangfeiz Enthusiast
              VMware Employees

              Did you run 'kubectl -n openstack apply -f restore.yaml' this time?

               

              If so, please run 'kubectl -n openstack get restorationrequests' to see if a restorationRequest CR exists.

               

               If it does, check the logs of the rnb-controller-xxx pod for any errors.
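
               For example (this assumes the rnb-controller pods belong to a Deployment named rnb-controller, as the pod prefix suggests; --all-containers is needed because that pod runs two containers):

               kubectl -n openstack get restorationrequests

               kubectl -n openstack describe restorationrequest restorationrequest1

               kubectl -n openstack logs deployment/rnb-controller --all-containers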

              • 4. Re: VIO 6 Upgrade
                jet1981 Novice

                 Yes, I ran 'kubectl -n openstack apply -f restore.yaml' again, with the same negative result.

                 

                 I checked for the restorationRequest CR and it was present. No matter what I do, the cluster deployment doesn't kick off.

                 

                 I have deleted the VIO 6 vApp and started over from scratch several times now, following the procedure exactly each time.

                 

                 There are no overt errors in rnb-controller; it only reports '{"state":"WAITING FOR CONTROLLERS","conditions":null}', which makes sense, since the controllers never build.

                • 5. Re: VIO 6 Upgrade
                  xiangfeiz Enthusiast
                  VMware Employees

                   In reply 2 you said the 3 controllers were built, so each run may reach a different state. If the controller VMs are present in vCenter, you can check 'kubectl get nodes' to see whether the controllers joined the K8s cluster successfully. If the time on the controller VMs is not synced, they will not be able to join the K8s cluster.
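
                   For example (the time check assumes the controller VMs run a systemd-based guest OS where 'timedatectl' is available; run it on each controller):

                   kubectl get nodes -o wide

                   timedatectl status

                   A line like 'System clock synchronized: yes' (exact wording varies by systemd version) indicates the clock is in sync.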