aghosh
Contributor

vRA 8.2 Error- Failed to start VA logging and external logging integrations via fluentd

vRA 8.2 keeps rebooting with the message: failed to start VA logging and external logging integrations via fluentd. See 'systemctl status fluentd.service' for details.
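Since the message points at the fluentd unit, a reasonable first step (assuming root SSH or console access to the appliance) is to look at the unit state, its recent journal, and free space on the log partition — a full disk is a common reason logging services fail to start:

```shell
# Run on the vRA appliance (assumes root SSH/console access).
systemctl status fluentd.service                  # unit state plus the last log lines
journalctl -u fluentd.service --no-pager -n 50    # recent fluentd journal entries
df -h /var/log                                    # a full log partition is a common culprit
```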

Safos
Contributor

Hi

I have the same error with vRA 8.6. Did you find a solution?

 

Regards

Sofiane

 

BradCalvertLPNT
Contributor

Same problem here. All we did was shut down vRA 8.2 using LCM, expand each node's memory from 40 GB to 42 GB per the 8.4 upgrade guide, and power vRA back on from LCM.

The primary node is doing this; the other two have booted to their blue screens.

I do not like how fragile vRA is 😞

Safos
Contributor

Hi

Thanks for the update. My vRA has 42 GiB, and my problem is that I cannot access the web interface on either port 443 or port 5480, as explained on the VMware website.

Regards

Sofiane  

paul_xtravirt
Expert

When I have seen this before, it is because the appliance didn't shut down properly.

Can you ssh into the primary vRA node and then run the following command?

kubectl get pods --all-namespaces

I am assuming you won't see anything.

I would recommend shutting down and then restarting properly, as per the guide linked below. I never rely on LCM to restart my vRA 8 instances, tbh. SSH in and follow the shutdown process in the link, then restart the first node and let it boot fully (it can take a long time), then follow the start-up instructions.
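For reference, the graceful shutdown/start sequence from VMware's vRA 8.x documentation looks roughly like the sketch below — verify the exact commands and script paths against the linked cheat sheet before running them, as these are from memory:

```shell
# Graceful shutdown (run on one node as root; stops services cluster-wide):
/opt/scripts/svc-stop.sh
sleep 120
/opt/scripts/deploy.sh --onlyClean
# ...then power off the nodes.

# Start-up: power the nodes back on, wait for them to boot, then on one node:
/opt/scripts/deploy.sh
```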

vRA 8.x Cheat Sheet (automationpro.co.uk) - see shutdown and start-up section.

If you found this helpful, please consider awarding some points
Safos
Contributor
Contributor

Hi

Yes, it should be because of that. After following the instructions on how to shut down and start the appliance properly (thanks for the link), kubectl get pods --all-namespaces gives me the output below:

kube-system command-executor-t8xr9 1/1 Running 3 4d20h
kube-system coredns-2qskk 1/1 Running 3 4d20h
kube-system etcd-vra.cloudz.local 1/1 Running 3 4d20h
kube-system health-reporting-app-85hvf 1/1 Running 3 4d20h
kube-system kube-apiserver-vra.cloudz.local 1/1 Running 3 4d20h
kube-system kube-controller-manager-vra.cloudz.local 1/1 Running 7 4d20h
kube-system kube-flannel-ds-hbvxg 1/1 Running 3 4d20h
kube-system kube-node-monitor-9kjkl 1/1 Running 3 4d20h
kube-system kube-proxy-f7xwt 1/1 Running 3 4d20h
kube-system kube-scheduler-vra.cloudz.local 1/1 Running 7 4d20h
kube-system kubelet-rubber-stamp-zqvg5 1/1 Running 3 4d20h
kube-system metrics-server-8g4jv 1/1 Running 3 4d20h
kube-system network-health-monitor-6vjm9 1/1 Running 3 4d20h
kube-system predictable-pod-scheduler-55dqk 1/1 Running 3 4d20h
kube-system prelude-network-monitor-cron-1643113620-96l5h 0/1 Completed 0 6m55s
kube-system prelude-network-monitor-cron-1643113800-57whl 0/1 Completed 0 3m56s
kube-system prelude-network-monitor-cron-1643113980-h7rbp 0/1 Completed 0 56s
kube-system state-enforcement-cron-1643113800-4xl8t 0/1 Completed 0 3m56s
kube-system state-enforcement-cron-1643113920-crwbq 0/1 Completed 0 116s
kube-system tiller-deploy-669848bc84-6wwn6 1/1 Running 3 4d20h
kube-system update-etc-hosts-9tm4w 1/1 Running 3 4d20h
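As a quick sanity check on a listing like this, you can filter for anything that is not Running or Completed. A minimal sketch, assuming the listing has been saved to a hypothetical pods.txt in the default five-column kubectl layout (NAMESPACE NAME READY STATUS RESTARTS AGE, so STATUS is field 4):

```shell
# Print only pods whose STATUS column is neither Running nor Completed.
awk '$4 != "Running" && $4 != "Completed"' pods.txt
```

An empty result from this filter is consistent with the listing above: every pod is healthy.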

It seems that the pods are running, but I still can't access the web interfaces, neither on port 443 nor on port 5480. Here is the output of listing the ports (attached screenshot): Safos_0-1643114335653.png
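To confirm whether anything is actually listening on the two web ports, one option (assuming the iproute2 `ss` tool is present on the appliance — worth verifying, though it usually is on Photon OS) is:

```shell
# List TCP listeners and keep only ports 443 and 5480.
ss -tln | grep -E ':(443|5480)([^0-9]|$)'
```

No output would mean the ingress/proxy services are not up yet even though the kube-system pods are, which would point at the prelude services rather than the network.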

Regards

Sofiane 
