I think I don't understand how clustering works in vRO 7.6, or I am doing something wrong while configuring it. I have read the manual a couple of times and still nothing.
I have 3 nodes: vro761, vro762, vro763, of which vro761 is the leader. The load balancer is configured with: vrocluster7.greg.labs
If I open the vrocluster7.greg.labs website, I can be served via vro763.greg.labs, for example, but the link on the page to the client always shows the leader I chose at the setup stage, so it points to vro761. This works as long as vro761 is up and running.
If vro761 goes down, I can't open the HTML5 client, since 761 is down. If I type the URL myself, e.g. vro762.greg.labs:8281, the link to the client still points to 761, the leader.
It is not like in vRO 8, where the link always points to the load balancer FQDN.
Are we supposed to change the FQDN of the host in the Control Center on the leader before joining the other nodes to it? The 8.0.1 documentation has a step like this, but the 7.6 documentation does not.
I don't know, maybe I am thinking of the clustering here in the wrong way. Why do we need the load balancer in this case?
I also noticed that if I use the old Java client, I can connect to vro761, vro762, or vro763 however I want, and it always works. I can also use the load balancer FQDN there, and the LB will always forward the Java client to a working node; that works fine.
In the screenshot above, the cluster in sync mode has noticed that node 761 is down and has moved the master role to 762. But all the links still point to node 761, so when I open the website vrocluster7.greg.labs and click the HTML5 client link I get: website not reachable.
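One way to see this symptom without clicking around is to fetch each node's landing page and look at where the client link actually points. Below is a minimal sketch of that check; the sample HTML, the hostnames, the ports, and the "/vco" substring used to spot the client link are all assumptions modelled on what my landing page looks like, not an official format:

```python
from html.parser import HTMLParser

class ClientLinkFinder(HTMLParser):
    """Collect the href of every <a> tag that looks like a vRO client link."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if "/vco" in href:  # assumption: client links contain "/vco"
                self.hrefs.append(href)

# Hypothetical landing page as served by vro763 via the LB: the links are
# hard-wired to the leader (vro761), which is the behavior described above.
sample_page = """
<html><body>
  <a href="https://vro761.greg.labs:8281/vco/">Orchestrator HTML5 Client</a>
  <a href="https://vro761.greg.labs:8283/vco-controlcenter/">Control Center</a>
</body></html>
"""

finder = ClientLinkFinder()
finder.feed(sample_page)
for href in finder.hrefs:
    print(href)
```

In a real check you would download the page from each node and from the LB FQDN (e.g. with urllib) and feed that into the parser instead of the sample string; if every node reports an href starting with the leader's FQDN, you are seeing the same behavior I describe.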
So is there some explanation for this behavior? Is it by design that after the cluster is formed it is not possible to use the HTML5 client on any node besides the leader, or am I doing something wrong here?
This does not happen for the Control Center; everything there works using the LB FQDN: vrocluster7.greg.labs is being served via vro762 while vro761 is actually down, and I can still click the link to the Control Center and be served the website content.
One other question: have you noticed that the VAMI certificate has doubled domain strings?
If you check the main website, its certificate is in the correct format. I installed it using the OVA import. The normal/initial website for the client, at 8281, has a proper CN value.
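To compare the certificates systematically, you can pull each subject CN (e.g. with openssl s_client against 8281 and against the VAMI port, usually 5480 on VMware appliances) and classify it against the FQDN you expect. A tiny sketch of that classification; the "doubled" sample subject below is a hypothetical illustration of the symptom, since I don't know the exact form your doubled CN takes:

```python
def check_cn(cn: str, expected_fqdn: str) -> str:
    """Classify a certificate subject CN against the expected FQDN.

    "doubled" means the FQDN occurs more than once inside the CN,
    which is the doubled-domain-string symptom described above.
    """
    if cn == expected_fqdn:
        return "ok"
    if cn.count(expected_fqdn) > 1:
        return "doubled"
    return "mismatch"

# Hypothetical subjects: the main website cert is fine, the VAMI one doubled.
print(check_cn("vro761.greg.labs", "vro761.greg.labs"))                  # ok
print(check_cn("vro761.greg.labsvro761.greg.labs", "vro761.greg.labs"))  # doubled
```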
I also recorded a video of the screen while creating the cluster, to log what I did, in case you find some error in what I am doing.
One other question: is it normal that a deleted node is still present in the Control Center? I have rebooted all nodes, and the Control Center still thinks that there are 3 nodes.
Hello Alessandro, I read this guide while doing the LB setup. There is not much explained besides, I think, the part at the bottom that states:
Accessing Orchestrator Client in HA mode
Load balancing of the Orchestrator client UI (TCP ports 8286, 8287) is not supported due to technical limitations. You should access the client UI on each node directly. If you use the Orchestrator client via load balancer, you may see incomplete or incorrect data.
So "You should access the client UI on each node directly." -> I mean, I can't do that, as per the screenshots the links always point to the master node.
But this is for an older version, so I am not sure whether it is the same in 7.6, nor whether this document would explain my doubts/questions, like whether it is normal that when the master is down nobody can open the vRO HTML5 client.
For 7.6 I was following the 7.6 documentation. I mentioned 8.0 since I saw it and know that it is configured in a proper way there. I am not sure how it was supposed to work; there is nothing in the documentation about it, or I can't find it, and there is not much information on the internet either, or at least I can't find it. I have spent quite some time on it already.
In case I 'misconfigure' it, I mean I do not follow the procedure for 7.6 and instead do this:
I would connect to the leader node 761, and instead of joining 762 and 763 to 761 right away, I would first go to the Control Center of 761 and change the hostname to the LB FQDN.
After the 761 hostname in the Control Center is changed and points to vrocluster7.greg.labs, I would then join the other nodes, using vro761.greg.labs as the host to join to.
Notice that the link is now correct for the HTML5 client,
and that I am being load balanced as well.
My other desktop gets a different vRO server to handle its connection.
And in case a node dies, for example 761, I can still open the website via the URL of any of the remaining nodes, because the link to the client points to the LB FQDN, and the load balancer knows which node is up or down and will not direct me to a node that is down at the moment.
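The behavior I rely on here can be sketched as follows: the LB health-monitors each node and only hands the LB FQDN to a node whose probe succeeds. A real probe would be an HTTPS request against whatever health endpoint your LB setup guide prescribes; in this sketch the probe is injected as a plain function so the failover logic itself is visible (node names and the up/down map are my lab's, for illustration):

```python
from typing import Callable, Iterable, Optional

def pick_healthy_node(nodes: Iterable[str],
                      probe: Callable[[str], bool]) -> Optional[str]:
    """Return the first node whose health probe succeeds, else None.

    This mimics what the load balancer's health monitor does: a node
    that fails the probe is taken out of rotation, so the LB FQDN
    only ever resolves to a node that is actually up.
    """
    for node in nodes:
        if probe(node):
            return node
    return None

# Simulated state matching the scenario above: 761 is down, 762/763 answer.
up = {"vro761.greg.labs": False,
      "vro762.greg.labs": True,
      "vro763.greg.labs": True}

chosen = pick_healthy_node(up, lambda n: up[n])
print(chosen)  # vro762.greg.labs
```

A real LB of course also spreads load across all healthy nodes rather than always picking the first, but the failover part is what makes the "client link points at the LB FQDN" workaround survive a dead node.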
The documentation does not say to do it this way, at least I don't see it, so I wonder: am I supposed to do it this way, or am I going to end up with problems I am not aware of later on?
I don't have a vRA solution, just vRO doing some automation on a VC, and I wanted to have 2-3 vRO nodes so that if one DC goes down, the requests go to the load balancer URL and one of the vRO nodes picks them up and handles them. Was this designed so that we can use the HTML5 client this way? I doubt that somebody forgot to write something in the documentation; probably it's me that is missing something and I just don't see it.
As you have already written, the documentation is not very satisfactory, also because it puts vRA (which you don't have) in the middle. For this reason, you could write to support. Perhaps support can provide you with more specific documentation.