MC1903
Enthusiast

Multiple concurrent vRA Installations to the same vCenter Server? (Non-Prod / Lab Env)

Hi,

Is it possible to have multiple concurrent instances of vRA (different versions) configured into the same vCenter Server?

I am in a lab environment and am trying to get to grips with vRealize Suite setup/initial configuration.

I used vRSLCM v8.6 to successfully deploy (after a few attempts) vIDM & vRA, using the Easy Installer. I have then used vRSLCM to install vRLI and vROps (again both v8.6). All working great.

I wanted to try a vRSLCM-managed upgrade, so I downloaded vRSLCM v8.3 and started a new Easy Installer installation into the same vCenter Server, using new VM names/hostnames/etc.

It had been installing for a few hours (much longer than the v8.6 install took) and I just got a vRA installation failure message during stage 6 (Initialize vRealize Automation); but when I checked the requests queue, the error cleared itself and it went back to 'In Progress' at the same point in stage 6.

Can I have multiple vRA clusters configured to the same vCenter Server?

Thanks

M

EDIT:

Typical... just after pressing submit here, it failed again:

This is a single-node vRA installation 😕

[Screenshot: MC1903_0-1637679284814.png]

Error:

Error Code: LCMVRAVACONFIG590003
Cluster initialization failed on vRealize Automation.
vRealize Automation virtual appliance initialization failed on mc-vvra-v-202a.momusconsulting.com. Login to vRealize Automation and check /var/log/deploy.log file for more information about the failure. If it is a vRealize Automation high availability and the load balancer has to be reset, set the 'isLBRetry' to 'true' and enter a valid load balancer hostname.

com.vmware.vrealize.lcm.common.exception.EngineException: vRealize Automation virtual appliance initialization failed on mc-vvra-v-202a.momusconsulting.com. Login to vRealize Automation and check /var/log/deploy.log file for more information about the failure. If it is a vRealize Automation high availability and the load balancer has to be reset, set the 'isLBRetry' to 'true' and enter a valid load balancer hostname.
    at com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaInitializeTask.execute(VraVaInitializeTask.java:126)
    at com.vmware.vrealize.lcm.plugin.core.vra80.task.VraVaInitializeTask.retry(VraVaInitializeTask.java:221)
    at com.vmware.vrealize.lcm.automata.core.TaskThread.run(TaskThread.java:43)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
    at java.base/java.lang.Thread.run(Unknown Source)
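For anyone landing on the same error: the message itself tells you where to look. A minimal sketch of that log check (hostname taken from the error above; adjust for your appliance — the helper is defined but not invoked, so this file is safe to source):

```shell
#!/bin/sh
# Sketch: inspect /var/log/deploy.log on the failing vRA appliance,
# as the LCMVRAVACONFIG590003 message asks.
check_deploy_log() {
  host=${1:?usage: check_deploy_log <vra-hostname>}
  # Tail of the deploy log:
  ssh "root@${host}" "tail -n 200 /var/log/deploy.log"
  # Most recent error/failure lines only:
  ssh "root@${host}" "grep -iE 'error|fail' /var/log/deploy.log | tail -n 20"
}
# Example: check_deploy_log mc-vvra-v-202a.momusconsulting.com
```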

1 Solution

Accepted Solutions
MC1903
Enthusiast

Tried disabling CEIP as per https://kb.vmware.com/s/article/76053, but the service restart eventually got stuck in a loop, waiting longer and longer each time for something to come ready.
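For reference, my understanding of that KB's remediation boils down to roughly the following on the vRA appliance. This is a sketch from memory, not a copy of the KB: the vracli `ceip` subcommand and the /opt/scripts/deploy.sh script (with `--onlyClean`) are assumptions based on vRA 8.x, so verify against the KB before running. The function is defined but not invoked, so the file is safe to source.

```shell
#!/bin/sh
# Sketch (assumed commands, vRA 8.x): turn CEIP off, then clean and
# redeploy the prelude services so vco-app comes back without telemetry.
disable_ceip_and_redeploy() {
  vracli ceip off                      # assumed: CEIP toggle subcommand
  /opt/scripts/deploy.sh --onlyClean   # assumed: tear the services down
  /opt/scripts/deploy.sh               # assumed: redeploy and restart them
}
```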

Gave up and trashed the vRA deployment:

  • SSH'd into the vRA appliance as root and confirmed the vco-app pod was in the CrashLoopBackOff state with kubectl -n prelude get pods.
  • Crashed out the vRA install (powered the vRA VM off and deleted it).
  • SSH'd into the vRSLCM instance and gracefully restarted it.
  • Deleted the dead vRA environment from within vRSLCM.
  • Recreated a new environment for vRA within vRSLCM.
  • Added a certificate for vRA into the locker.
  • Installed vRA into the newly created environment.
  • SSH'd into the vRA appliance once it was up and monitored the pod install/startup with watch -n 5 kubectl -n prelude get pods.
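The pod checks in the steps above can be sketched as the two helpers below, run on the vRA appliance as root (assumes kubectl is on the PATH there, as it is on vRA 8.x; functions only, not invoked, so the file is safe to source elsewhere):

```shell
#!/bin/sh
# Sketch: spot unhealthy pods, then watch them come up, in the prelude
# namespace where vRA 8.x runs its services.
show_unhealthy_pods() {
  # Anything not Running/Completed, e.g. vco-app in CrashLoopBackOff:
  kubectl -n prelude get pods | grep -vE 'Running|Completed'
}
watch_pods() {
  # Refresh the full pod list every 5 seconds during install/startup:
  watch -n 5 kubectl -n prelude get pods
}
```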

34 minutes from start to finish.

Done!

