Hello,
I have been trying to perform a cluster installation of vRO 8.0.1 and, in my opinion, did everything by the documentation. (Yesterday I did a 7.6 cluster installation, and that worked just fine.)
Has anybody seen these error messages before?
It basically ends with this:
Failed to execute helm upstall command. Will retry in 10 seconds...
+ do-helm-upstall postgres enableHttpProxy=,enableResourceLimits=true,kernel.isCustomized=true,preferLocalEndpoints=true,cluster.replicaCount=3,enableTelemetry=false,FQDN=vrolb80.greg.labs,INGRESS_URL=https://vrolb80.greg.labs prelude
+ local DEPLOYED_STATUS=1
+ local name=postgres
+ local values=enableHttpProxy=,enableResourceLimits=true,kernel.isCustomized=true,preferLocalEndpoints=true,cluster.replicaCount=3,enableTelemetry=false,FQDN=vrolb80.greg.labs,INGRESS_URL=https://vrolb80.greg.labs
+ local namespace=prelude
+ local extra_values=
+ local extra_flags=
++ helm status -o json postgres
++ jq -r .info.status.code
+ local status=4
+ [[ 4 != \1 ]]
+ echo 'purging release: postgres'
purging release: postgres
+ helm delete --purge postgres
release "postgres" deleted
+ local 'cmd=helm upgrade --install --timeout=1800 --wait --namespace prelude'
+ [[ -n enableHttpProxy=,enableResourceLimits=true,kernel.isCustomized=true,preferLocalEndpoints=true,cluster.replicaCount=3,enableTelemetry=false,FQDN=vrolb80.greg.labs,INGRESS_URL=https://vrolb80.greg.labs ]]
+ cmd='helm upgrade --install --timeout=1800 --wait --namespace prelude --set-string '\''enableHttpProxy=,enableResourceLimits=true,kernel.isCustomized=true,preferLocalEndpoints=true,cluster.replicaCount=3,enableTelemetry=false,FQDN=vrolb80.greg.labs,INGRESS_URL=https://vrolb80.greg.labs'\'''
+ [[ -n '' ]]
+ [[ -n '' ]]
+ cmd='helm upgrade --install --timeout=1800 --wait --namespace prelude --set-string '\''enableHttpProxy=,enableResourceLimits=true,kernel.isCustomized=true,preferLocalEndpoints=true,cluster.replicaCount=3,enableTelemetry=false,FQDN=vrolb80.greg.labs,INGRESS_URL=https://vrolb80.greg.labs'\'' postgres postgres'
++ expr 12334 % 10 + 1
+ rnd=5
+ echo 'sleeping for 5 before upstalling postgres'
sleeping for 5 before upstalling postgres
+ sleep 5
+ echo 'running: helm upgrade --install --timeout=1800 --wait --namespace prelude --set-string '\''enableHttpProxy=,enableResourceLimits=true,kernel.isCustomized=true,preferLocalEndpoints=true,cluster.replicaCount=3,enableTelemetry=false,FQDN=vrolb80.greg.labs,INGRESS_URL=https://vrolb80.greg.labs'\'' postgres postgres'
running: helm upgrade --install --timeout=1800 --wait --namespace prelude --set-string 'enableHttpProxy=,enableResourceLimits=true,kernel.isCustomized=true,preferLocalEndpoints=true,cluster.replicaCount=3,enableTelemetry=false,FQDN=vrolb80.greg.labs,INGRESS_URL=https://vrolb80.greg.labs' postgres postgres
+ eval helm upgrade --install --timeout=1800 --wait --namespace prelude --set-string ''\''enableHttpProxy=,enableResourceLimits=true,kernel.isCustomized=true,preferLocalEndpoints=true,cluster.replicaCount=3,enableTelemetry=false,FQDN=vrolb80.greg.labs,INGRESS_URL=https://vrolb80.greg.labs'\''' postgres postgres
++ helm upgrade --install --timeout=1800 --wait --namespace prelude --set-string enableHttpProxy=,enableResourceLimits=true,kernel.isCustomized=true,preferLocalEndpoints=true,cluster.replicaCount=3,enableTelemetry=false,FQDN=vrolb80.greg.labs,INGRESS_URL=https://vrolb80.greg.labs postgres postgres
Release "postgres" does not exist. Installing it now.
purging release: vco
release "vco" deleted
sleeping for 7 before upstalling vco
running: helm upgrade --install --timeout=1800 --wait --namespace prelude --set-string 'enableHttpProxy=,enableResourceLimits=true,kernel.isCustomized=true,preferLocalEndpoints=true,cluster.replicaCount=3,enableTelemetry=false,FQDN=vrolb80.greg.labs,INGRESS_URL=https://vrolb80.greg.labs,auth.provider=basic,redirectToHomePage=true' vco vco
Release "vco" does not exist. Installing it now.
+ sleep 10
Error: release vco failed: timed out waiting for the condition
helm failed to upgrade 'vco' in namespace 'prelude'
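From the trace it looks like the installer's do-helm-upstall wrapper checks the release status, purges a failed release, and then retries helm upgrade --install until it succeeds. A minimal runnable sketch of that retry pattern (helm is stubbed out here so the snippet is self-contained; the real script calls the actual helm binary and sleeps 10 seconds between attempts):

```shell
#!/usr/bin/env bash
# Sketch of the retry pattern visible in the trace above.
# "helm" is a stub that fails twice and then succeeds, standing in
# for the real binary so the loop can be exercised anywhere.
attempts=0
helm() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # succeed on the third call
}

until helm upgrade --install --timeout=1800 --wait --namespace prelude vco vco; do
  echo "Failed to execute helm upstall command. Will retry in 10 seconds..."
  sleep 0.1               # the real script sleeps 10 seconds
done
echo "vco installed after $attempts attempts"
```

With a real helm binary and a release that never becomes ready within --timeout, this loop just keeps repeating, which matches the behaviour in the log.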
I have created 3 instances: vro5.greg.labs / vro6 / vro7; vro5 is the master. I have a load balancer on vrolb80.greg.labs, with member pools on port 443. Each node can ping/nslookup all the other nodes and the LB.
On each of the nodes, before doing anything, I applied the KB for the account passwords.
Any idea whether I am doing something wrong / have misconfigured anything, or did something crash / is this a bug in the software?
Thank you.
I think I have found the issue behind this behaviour.
I found this inside the vRA docs:
If anyone from support reads this, can you please put this into the vRO documentation as well, as it is still valid for vRO?
f5 side
I have installed a Kemp LoadMaster load balancer, and I can confirm that a 3-node vRO 7.6 or 8.0.1 cluster setup works fast, without any issues. Once switched to F5, it behaves unpredictably. I will leave this post up in case somebody is looking into the same issues.
Not sure how I managed it, but I made a couple more attempts and the cluster itself did install; there was no error at the end. It's just not reliable: sometimes it works, sometimes it stops working, and it behaves differently in different web browsers.
I am not sure how to troubleshoot the vRO installation/configuration itself; there is nothing about it in the documentation. It could be an F5 issue; I have triple-checked it, and it is configured according to the documented setup. Could anyone share their experience with vRO running in a cluster? Did it slow down for you? A single-node installation runs really, really fast, but once the node joins the cluster it is suddenly at least 10x slower.
I'm out of ideas now.
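In case it helps anyone hitting the same timeout: the trace shows the vRO 8.x appliance deploys its services into a Kubernetes namespace called prelude, so the usual kubectl inspection commands are a reasonable first step. A sketch of what I would run on one of the nodes (the helper name and the exact command list are my own generic checklist, not anything from the vRO docs):

```shell
#!/usr/bin/env bash
# Hedged helper: print (and, when kubectl is on PATH, actually run)
# basic inspection commands for a namespace. The command list is a
# generic Kubernetes checklist, not anything vRO-specific.
inspect_ns() {
  local ns="${1:-prelude}"
  local cmd
  for cmd in \
      "kubectl get pods -n $ns -o wide" \
      "kubectl describe pods -n $ns" \
      "kubectl get events -n $ns --sort-by=.lastTimestamp"
  do
    echo "+ $cmd"
    # Only execute when kubectl exists (e.g. on the appliance itself).
    command -v kubectl >/dev/null 2>&1 && eval "$cmd"
  done
  return 0
}

inspect_ns prelude
```

Pods stuck in Pending or CrashLoopBackOff, and the recent events, would at least narrow down why helm's --wait timed out.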