VMware Cloud Community
LMcManamon
Contributor

vRO 8.0/8.1 setup issue

Hello,

I am trying to set up an 8.0 or 8.1 vRO on our 7.0 cloud for testing purposes. This is all on a private network. After deploying the OVF template and powering on the VM, we are able to do successful nslookups for both the DNS name and the IP, and both respond to ping as well. When I try to connect to vRO through the browser at "https://FQDN/vco", there is a delay and then a "connection refused" error. This error occurs in all the different browsers. We've followed the instructions in the "Installing and Configuring VMware vRealize Orchestrator" guide and are unsure of what the issue may be. Any help or suggestions would be appreciated.
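A quick way to distinguish a closed port from a filtered one is a verbose curl against the appliance; "vro.example.com" below is a stand-in for the real FQDN:

    # "Connection refused" means nothing is listening on port 443 (or the
    # appliance actively rejects it); a long hang before a timeout points
    # at a dropped/filtered connection instead.
    curl -vk https://vro.example.com/vco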

Luke McManamon

iiliev
VMware Employee

Hi,

A few things worth trying come to mind:

  • Which build number exactly are you deploying? I recall there were some issues related to expired passwords in some of the earlier 8.x builds, so I'd suggest trying a recent 8.1 service pack build.
  • Could you connect via SSH to the appliance and check whether the Kubernetes pods are up and running, especially those in the 'prelude' namespace (see the sketch below)?
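Something like this, run as root over SSH (standard kubectl invocations, nothing build-specific):

    # Pods in the 'prelude' namespace, where the vRO services run.
    kubectl -n prelude get pods

    # All pods, including the Kubernetes system components.
    kubectl get pods --all-namespaces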
LMcManamon
Contributor

The build number in the OVA file is 8.1.0.9326.

I am able to connect through SSH. I have not worked with Kubernetes pods before, but when I run "kubectl get pods" or "kubectl describe pods", it says there are no Kubernetes pods. Is this what you wanted me to do, and does the result seem abnormal?

bschreze
Contributor

Hi,

I am having exactly the same issue, but I am working with vRO 8.2 in a whole new environment.

My first thought was that something was wrong with the new environment, blocked ports or something else.

Then I tried it in my company's lab and in my private lab at home.

What I verified:

- No ports are blocked on firewalls

- Access is not working even when I am in the same network without firewalls in between

- SSH access is working

- kubectl get pod and kubectl -n prelude get pod give back the following message

     W1020 12:40:36.744809   17870 loader.go:223] Config not found: /etc/kubernetes/admin.conf

     The connection to the server localhost:8080 was refused - did you specify the right host or port?

- nslookup is working

- tcpdump shows that HTTPS packets are coming in, but yes, the vRO containers are not getting started (see the quick checks after this list)
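Two quick checks that may narrow down the localhost:8080 message above (the paths are the standard kubeadm/systemd locations that the error itself points at):

    # kubectl falls back to localhost:8080 when it finds no kubeconfig;
    # on a healthy appliance this file should exist.
    ls -l /etc/kubernetes/admin.conf

    # If the file is missing, Kubernetes was likely never initialized;
    # check whether the kubelet service is running and what it logged.
    systemctl status kubelet
    journalctl -u kubelet --no-pager | tail -n 50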

Any ideas?

Benjamin

bschreze
Contributor

Hi,

I got a step further.
The first error I found in /var/log/bootstrap/firstboot was something with NTP.
I had used a comma-separated list of servers during my first tries. After changing this to one single server, the deployment at least made it past the NTP step.
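For anyone checking their own appliance, the time sync settings can be inspected from the shell; the vracli subcommands below are from memory, so verify them against 'vracli ntp --help' on your build, and ntp.example.com is a placeholder:

    # Show the current time sync configuration (subcommand name assumed).
    vracli ntp show-config

    # Point time sync at a single NTP server (assumed syntax).
    vracli ntp systemd --set ntp.example.com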

Now I can see errors like the following:

Running check eth0-ip
Running check non-default-hostname
Running check single-aptr
make: *** [/opt/health/Makefile:36: single-aptr] Error 1
make: Target 'firstboot' not remade because of errors.

These messages come up around ten or fifteen times, and then the script ends.
+ res=124
+ echo 'Script /etc/bootstrap/firstboot.d/00-fix-firewall-ports failed, error status 124'
Script /etc/bootstrap/firstboot.d/00-fix-firewall-ports failed, error status 124
+ exit 124

Still no k8s pods created.

Regards
iiliev
VMware Employee

Hi,

I think this error means that the DNS server resolves two or more hostnames (PTR records) for the appliance's IP address, while just one is allowed.
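An easy way to verify that from any machine using the same DNS server (10.0.0.50 is a placeholder for the appliance IP):

    # The reverse lookup should return exactly ONE PTR record;
    # two or more names here will trip the 'single-aptr' check.
    dig -x 10.0.0.50 +short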

MPeterson_MN
Contributor

I am also getting the same error as Luke described. I am also on a private network. I was able to deploy the OVF template with no issues. After powering up the VM, I could also do a successful nslookup for the DNS name and IP, as well as pings for the FQDN and IP. However, the minute I try to connect to the control center (https://FQDN/vco), there is a long wait until I finally get a "connection refused" error.

I have also logged into the VM through the Web Console. I was able to log in with root and the password set up during deployment. I did change the password to never expire.

I would appreciate next steps....

Thanks, 

Mark Peterson

 

*** Update  1/18/2021 ***

I watched the boot process for my vRO deployment. I am now getting the following error after a successful deployment, when powering the VM on for the first time.

Failed to start LSB: Guest OS initialization.

A start job is running for LSB: Guest OS initialization
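If anyone else hits this, the failing unit can usually be identified from the appliance shell with standard systemd tooling:

    # List units that failed during boot; the LSB init script's unit
    # name will show up here.
    systemctl --failed

    # Pull its messages from the current boot log.
    journalctl -b --no-pager | grep -i 'lsb'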

sanjeevnewar198
Contributor

Hello,

I am observing the same issue with vRO build 8.4.2.

Do we have any update on this issue?

Thanks,

Sanjeev

izzetcanyc
Contributor

Same issue here 😑 Is there any solution?

Running check nodes-count
make: *** [/opt/health/Makefile:50: nodes-count] Error 1
Running check fips
make: *** [/opt/health/Makefile:97: fips] Error 1
make: Target 'deploy' not remade because of errors.
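The Makefile path and target names in that output suggest the individual health checks can be re-run by hand to see the underlying failure; the invocation below is an assumption based on the log, not a documented procedure:

    # Re-run the two failing checks directly (assumed invocation).
    make -C /opt/health nodes-count
    make -C /opt/health fips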

romansf
Contributor

I've been struggling with the same issue. I found out that the NTP server should be ONE, not a list of servers like the nameservers.

 

sharmad89
Contributor

Have the same issue with vRO 8.9. Firstboot.log says "Couldn't reach NTP Server. No Response received from  ". But I'm able to ping the NTP server IP from the appliance.
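Worth noting: ping only proves ICMP reachability, while NTP uses UDP port 123, which can be blocked even when ping succeeds. A rough check from the appliance (10.0.0.10 stands in for the NTP server; tcpdump is present on the appliance per earlier posts):

    # Watch for NTP traffic in both directions while the appliance
    # tries to sync; requests with no replies point at a blocked port.
    tcpdump -n udp port 123 and host 10.0.0.10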
