VMware Cloud Community
firefoxx04
Contributor

Clustering vRO 7.3, having trouble.

Hi, let me start by saying that I have clustered vRO 7.2 in the past without any issue. I am, however, having trouble with 7.3.

I have a vRA 7.3 instance and a PostgreSQL database.

Database config:

- Database = vmware

- User = vro

- Granted all permissions on vmware to vro.
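
For reference, the database setup described above can be sketched with psql. The database name, role name, and grant mirror the post; the password is a placeholder, so substitute your own:

```shell
# Run on (or against) the PostgreSQL host as a superuser.
# 'changeme' is a placeholder password -- replace it.
psql -U postgres <<'SQL'
CREATE ROLE vro WITH LOGIN PASSWORD 'changeme';
CREATE DATABASE vmware OWNER vro;
GRANT ALL PRIVILEGES ON DATABASE vmware TO vro;
SQL
```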

VRO hostnames

- vro1 (first ova deployment)

- vro2 (second ova deployment)

- vro (load balanced hostname)

Steps taken:

  1. Deploy vro 1
  2. Hostname = vro (not vro1)
  3. VRA authentication
  4. Connect vro1 to the external database
  5. Deploy vro 2
  6. Choose "Clustered Orchestrator"
  7. Cluster to vro1 (not the load balanced hostname)

Once finished, Orchestrator Cluster Management shows only the first hostname in the cluster; vro2 is nowhere to be found on the cluster management page. If I log into vro2 directly, it likewise lists only vro1 as a cluster member. And if I power off vro1, the load balancer forces me to connect to vro2, but the web page fails to load. Is there something I am missing?
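
One quick way to compare what each node thinks the cluster looks like is to hit the Orchestrator health endpoint on every node directly, bypassing the load balancer. The hostnames are the ones from the post, and the endpoint path is from the vRO 7.x REST API, so verify both against your environment:

```shell
# Query each node (and the VIP) directly.
# -k skips certificate validation, since the nodes still
# present self-signed certs at this stage.
for host in vro1 vro2 vro; do
  echo "== ${host} =="
  curl -ks "https://${host}:8281/vco/api/healthstatus" || echo "unreachable"
  echo
done
```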

4 Replies
daphnissov
Immortal

You must use the real hostname for the individual appliances and not the cluster name. So if vro.domain.com is your cluster name, you can't assign this to node 1 or node 2.

firefoxx04
Contributor

Each node has a unique hostname. The initial setup for vRO 7.3 requires that you input a hostname, and it clearly states to use the load balancer's (cluster) hostname.

Step 3 from Configure vRealize Automation Authentication Provider

Click CHANGE to configure the host name on which Control Center will be accessible. If you are about to configure an Orchestrator cluster, enter the host name of the load balancer virtual server.

https://docs.vmware.com/en/vRealize-Orchestrator/7.3/vRealize_Orchestrator_Load_Balancing.pdf

This is also clearly stated on the setup page itself. If you access the nodes via SSH, they have unique hostnames.

Example:

vro 1 = vro-cl-a.mydomain.com

vro 2 = vro-cl-b.mydomain.com

load balancer hostname = vro.mydomain.com

The load balancer hostname is what I use during the initial setup of node A. I then configure vRA authentication and my external database, reboot the system, and ensure that "Validate Configuration" passes. Then I add the second node by choosing "cluster deployment" and join it using the hostname of vro 1 (vro-cl-a), not the hostname of the load balancer.

The second node appears to join just fine but never appears in Orchestrator Cluster Management. Additionally, I cannot seem to figure out which node I am actually on without digging through my load balancer's logs.
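
One way to tell which node the load balancer has handed you, without digging through LB logs, is to read the certificate subject the VIP presents. While each node still serves its own self-signed certificate, the CN reveals which appliance answered (the hostname below is the one from the post):

```shell
# Ask the VIP for its certificate and print the subject.
# With per-node self-signed certs, the CN identifies the node.
echo | openssl s_client -connect vro.mydomain.com:8281 2>/dev/null \
  | openssl x509 -noout -subject
```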

daphnissov
Immortal

You need to use the real name on the first node and not the load balanced name. See the install and config guide here (PDF) page 36.

pastedImage_1.png

Essentially, you set up the first node as a standalone instance, as if you never intended to cluster it. The only difference is on the second node: there, you simply input the real name of the first node and it pulls over the configuration. Once that completes and the second node's server has restarted, the cluster should form, and you should then be barred from logging in directly to the second node's Control Center. If you're doing active-standby, the load-balanced address needs the appropriate scheduling method configured.
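
For the restart step above, the services can be bounced from an SSH session on the second node. The service names below are the ones used on the vRO 7.x appliance; treat this as a sketch and confirm on your build:

```shell
# On the second node, after joining it to node one:
service vco-configurator restart   # Control Center
service vco-server restart         # Orchestrator server itself
# Then confirm both came back up:
service vco-server status
```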

I set this up in my lab to show you including a live load balancer.

Node 1 (vrotest01):

pastedImage_2.png

Cluster Settings:

pastedImage_3.png

The cluster is formed and vrotest01 is the active node. The load balanced name is "vrotest".

LB configuration (single-armed mode):

pastedImage_4.png
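
As a generic illustration of the same idea, an active-standby pool with a health check might look like this in HAProxy. HAProxy is a stand-in here, not necessarily what the lab used; the node names and health-check path are assumptions taken from the screenshots and the vRO REST API:

```
# Hypothetical HAProxy equivalent of an active-standby vRO pool.
frontend vro_https
    bind *:8281
    mode tcp
    default_backend vro_nodes

backend vro_nodes
    mode tcp
    option httpchk GET /vco/api/healthstatus
    server vrotest01 vrotest01:8281 check check-ssl verify none
    server vrotest02 vrotest02:8281 check check-ssl verify none backup
```

The `backup` keyword is what makes vrotest02 a standby: it receives traffic only when the active node fails its health check.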

Status:

pastedImage_5.png

Logging into the cluster name:

pastedImage_6.png

Tools -> Trusted Publishers proves vrotest01 is active, as shown in the self-signed cert:

pastedImage_7.png

daphnissov
Immortal

Realized I didn't change the hostname over after I joined the cluster.

pastedImage_0.png

Also, make sure your certificate has the cluster name and not the individual node names. Once you change the hostname, you'll have to reconfigure the auth provider, and if the cert doesn't have that name in it you'll get an error.
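
You can confirm whether a certificate carries the cluster name before reconfiguring the auth provider by inspecting its Subject Alternative Names. The block below generates a throwaway self-signed cert just to demonstrate the inspection; the cluster and node names are illustrative, and the same `openssl x509` check works on the real vRO certificate:

```shell
# Generate a throwaway self-signed cert with the cluster name as a SAN
# (requires OpenSSL 1.1.1+ for -addext / -ext).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/vro-test.key -out /tmp/vro-test.pem \
  -subj "/CN=vro.mydomain.com" \
  -addext "subjectAltName=DNS:vro.mydomain.com,DNS:vrotest01.mydomain.com"
# Check that the cluster name is present in the SANs:
openssl x509 -in /tmp/vro-test.pem -noout -ext subjectAltName
```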

Updated LB configuration with correct health check strings and distribution method:

pastedImage_0.png

pastedImage_1.png

pastedImage_2.png

So the overall steps I took to configure this were: set up node one with the real name (not the VIP name), set up node two and join it to node one, change the cluster name, change the certificate, and change the auth provider.
