I am trying to set up an 8.0 or 8.1 vRO on our 7.0 cloud for testing purposes. This is all on a private network. After deploying the OVF template and powering on the VM, we are able to successfully do nslookups for both the DNS name and the IP, and both respond to ping. When I try to connect to the vRO through the browser at "https://FQDN/vco", there is a delay and then a "refused to connect" error. This error occurs in every browser we've tried. We've followed the instructions in the "Installing and Configuring VMware vRealize Orchestrator" guide and are unsure what the issue may be. Any help or suggestions would be appreciated.
A few things worth trying come to mind: can you SSH into the appliance and check whether the Kubernetes pods that run the vRO services have started (kubectl get pods)?
The build number in the OVA file is 184.108.40.20626.
I am able to connect through SSH. I have not worked with Kubernetes pods before, but when I run "kubectl get pods" or "kubectl describe pods", it says there are no Kubernetes pods. Is this what you wanted me to check, and does the result seem abnormal?
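For reference, here is roughly what I ran over SSH (vro.example.local stands in for our actual FQDN):
ssh root@vro.example.local
# List pods across all namespaces; on a healthy appliance the vRO services should appear here
kubectl get pods --all-namespaces
# Describe pods for detail; in my case both commands report that no pods exist
kubectl describe pods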
I am having exactly the same issue, but I am working with vRO 8.2 in a whole new environment.
My first thought was that something was wrong with the new environment: blocked ports or something else.
Then I tried it in my company's lab and in my private lab at home.
What I verified:
- No ports are blocked on firewalls
- Access is not working even when I am in the same network without firewalls in between
- SSH access is working
- kubectl get pod and kubectl -n prelude get pod give back the following message:
W1020 12:40:36.744809 17870 loader.go:223] Config not found: /etc/kubernetes/admin.conf
The connection to the server localhost:8080 was refused - did you specify the right host or port?
- nslookup is working
- tcpdump shows that HTTPS packets are coming in, but the vRO containers are not getting started (see the capture sketch below)
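For anyone who wants to reproduce that last check, this is the minimal capture I ran on the appliance (eth0 matches our setup; adjust the interface name if yours differs):
# Watch for inbound HTTPS traffic reaching the appliance
tcpdump -i eth0 -n port 443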
I got a step further.
The first error I found in /var/log/bootstrap/firstboot was NTP-related.
I had used a comma-separated list of servers during my first tries. After changing this to a single server, the deployment at least made it past the NTP step.
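To double-check that fix, this is a minimal sketch of how I verified time sync on the appliance (standard systemd tooling, nothing vRO-specific):
# Confirm the clock is now synchronized against the single NTP server
timedatectl status
# Re-scan the firstboot log for any remaining NTP errors
grep -i ntp /var/log/bootstrap/firstboot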
Now I can see errors like the following:
Running check eth0-ip
Running check non-default-hostname
Running check single-aptr
make: *** [/opt/health/Makefile:36: single-aptr] Error 1
make: Target 'firstboot' not remade because of errors.
+ echo 'Script /etc/bootstrap/firstboot.d/00-fix-firewall-ports failed, error status 124'
Script /etc/bootstrap/firstboot.d/00-fix-firewall-ports failed, error status 124
+ exit 124
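The single-aptr check appears to verify that the appliance IP resolves back to exactly one PTR (reverse DNS) record, so this is roughly how I checked ours (192.0.2.10 is a placeholder for the appliance IP):
# Reverse lookup; a healthy setup returns exactly one PTR record matching the FQDN
dig -x 192.0.2.10 +short
# nslookup can serve as a fallback if dig is not installed
nslookup 192.0.2.10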
I am also getting the same error as Luke described. I am also on a private network. I was able to deploy the OVF template with no issues. After powering up the VM, I could also do a successful nslookup for the DNS name and IP, as well as pings for the FQDN and IP. However, the minute I try to connect to the control center (https://FQDN/vco), there is a long wait until I finally get the "refused to connect" error.
I have also logged into the VM through the Web Console, using root and the password set up during deployment. I did change the password to never expire.
I would appreciate next steps.
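In the meantime, here is a minimal set of checks I plan to run from the appliance shell (standard Linux tooling, no vRO-specific commands):
# See whether anything is listening on port 443 at all
ss -tlnp | grep ':443'
# Hit the endpoint locally to rule out the network path
curl -vk https://localhost/vco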
*** Update 1/18/2021 ***
I watched the boot process for my vRO deployment. I am now getting the following error after a successful deployment, when powering the VM on for the first time.
Failed to start LSB: Guest OS initialization.
A start job is running for LSB: Failed to start LSB:
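If anyone else hits this, I plan to dig into the failed unit with standard systemd commands (a sketch, assuming nothing vRO-specific is needed):
# List units that failed during boot
systemctl --failed
# Pull boot-time journal entries mentioning the LSB service for more detail
journalctl -xb | grep -i lsb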