VMware Cloud Community

vRA 8.6 Open Ansible - UNREACHABLE! when running the playbook from vRA


Can someone please help with the following? We have integrated vRA 8.6 with open-source Ansible. The integration is set up and validates successfully. We followed the VMware integration article: https://docs.vmware.com/en/vRealize-Automation/8.6/Using-and-Managing-Cloud-Assembly/GUID-9244FFDE-2...

We have also validated that the Ansible controller is capable of running playbooks directly from the controller against any VM in the inventory (Linux and Windows). However, when we use Ansible as part of the cloud template, the VM is created but we always get the following error when it reaches the Ansible step.

TASK [Gathering Facts] *********************************************************
fatal: [test14]: UNREACHABLE! => {"changed": false, "msg": "Invalid/incorrect password: ", "unreachable": true}

PLAY RECAP *********************************************************************
test14 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0

Refer to logs at var/tmp/vmware/provider/user_defined_script/81e21438-52db-41ca-9f6e-765bba5cbd2f/ on Ansible Control Machine for more details.

After some troubleshooting, I think this might be related to the Ansible controller not being able to reach the VM that was created (in this example, test14). The controller has no reference to test14 in the inventory or Ansible hosts file, so I am guessing it can't reach it to run the playbook (this is just a guess). I looked at the logs on the controller (user_defined_script); the only two logs with data are shown below:
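One detail from the provider log below: the wrapper does add test14 to /home/ansible/inventory during the run, and keep_failed_host_in_inventory = false, so if I read that flag correctly the entry is removed again after a failure, which would explain why no trace of test14 remains in the inventory afterwards. A minimal, safe-to-run sketch of the kind of entry it appears to write (group and hostname taken from the log; the IP and ansible_user value are hypothetical), using a temp file instead of the real inventory:

```shell
# Simulate the inventory entry the vRA wrapper appears to write
# (group 'lin' and host 'test14' from the provider log; IP is made up):
inv=$(mktemp)
printf '[lin]\ntest14 ansible_host=10.0.0.14 ansible_user=ansible\n' > "$inv"
# Verify the host is present under the expected group:
grep -A1 '^\[lin\]' "$inv"
rm -f "$inv"
```

Watching the real inventory file during a deployment would confirm whether the entry appears and then disappears.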


PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
fatal: [test14]: UNREACHABLE! => {"changed": false, "msg": "Invalid/incorrect password: ", "unreachable": true}

PLAY RECAP *********************************************************************
test14 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0


[2022-03-06T21:00:28.355Z] Running create operation on host test14
[2022-03-06T21:00:28.357Z] Script arguments:
node_uuid = 81e21438-52db-41ca-9f6e-765bba5cbd2f
flock_wait_timeout = 15
max_connection_retries = 4
ansible_groups = lin
keep_failed_host_in_inventory = false
operation = create
node_host =
node_user = root
use_sudo = false
node_host_name = test14
node_previous_host_name =
ansible_inventory_path = /home/ansible/inventory
provisioning_playbook_paths = /home/ansible/git-setup.yml
node_password = ***REDACTED***
[2022-03-06T21:00:28.361Z] Use Sudo from integration account: false
[2022-03-06T21:00:28.363Z] script path var/tmp/vmware/provider/user_defined_script/81e21438-52db-41ca-9f6e-765bba5cbd2f
[2022-03-06T21:00:28.364Z] Checking if the inventory file '/home/ansible/inventory' exists
[2022-03-06T21:00:28.365Z] current hostname is test14
[2022-03-06T21:00:28.366Z] previous hostname is
[2022-03-06T21:00:28.372Z] Adding test14 to inventory /home/ansible/inventory under group lin
[2022-03-06T21:00:28.374Z] Obtained an exclusive lock on /home/ansible/.vra_inventory_lock_inventory to update inventory
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the
controller starting with Ansible 2.12. Current version: 3.6.8 (default, Nov 16
2020, 16:55:22) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]. This feature will be
removed from ansible-core in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
localhost | CHANGED => {
"changed": true,
"gid": 1000,
"group": "ansible",
"mode": "0674",
"msg": "option added",
"owner": "ansible",

The integration was created and validates successfully with the user "ansible". The cloud template/blueprint also runs with the ansible user:

type: Cloud.Ansible
authentication: usernamePassword
inventoryFile: /home/ansible/inventory
username: ansible
password: password123
groups:
  - lin
playbooks:
  provision:
    - /home/ansible/git-setup2.yml
osType: linux
maxConnectionRetries: 4
account: Ansible Control Machine DEV
host: '${resource.Cloud_vSphere_Machine_1.*}'

We have also created an ansible_vault_password_file.txt file, placed in /etc/ansible/, with the same password. The ansible user password is the same everywhere.
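For what it's worth, a vault password file only takes effect if Ansible is told about it, either via `vault_password_file` under `[defaults]` in ansible.cfg or via `--vault-password-file` on the command line; simply placing the file in /etc/ansible/ is not enough. A quick check on the controller (paths are the common defaults; adjust as needed):

```shell
# Hypothetical check: does the controller's ansible.cfg actually reference
# the vault password file? Prints the setting if present, a notice otherwise.
grep -i 'vault_password_file' /etc/ansible/ansible.cfg 2>/dev/null \
  || echo "vault_password_file not configured in /etc/ansible/ansible.cfg"
```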

As I mentioned, when logging in as the ansible user on the controller and adding the IP address of one of the vRA-created VMs to the inventory, I am able to run the same playbook manually.
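One thing worth ruling out, under the assumption that I'm reading the error right: as far as I can tell, "Invalid/incorrect password" is what Ansible's ssh connection plugin reports when sshpass exits with status 5, i.e. the remote host actively rejected the password. If the manual test from the controller happens to work via an SSH key, or as a different user than the one vRA passes (note node_user = root in the provider log, while the template uses the ansible user), the deployed VM's sshd may simply not accept that password login. A quick check on the target VM (default config path; adjust per distro):

```shell
# List any explicit PasswordAuthentication / PermitRootLogin settings in sshd
# config; "PasswordAuthentication no" or "PermitRootLogin no" would make every
# vRA-driven password login fail with exactly this UNREACHABLE symptom.
grep -Ei '^[[:space:]]*(PasswordAuthentication|PermitRootLogin)' /etc/ssh/sshd_config 2>/dev/null \
  || echo "no explicit PasswordAuthentication/PermitRootLogin setting found"
```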

I am not sure how to proceed; any help would be much appreciated.

Thank you so much.


Good afternoon. I was just curious if you ever found a fix for this issue? We are having the same issue and have tried everything to figure it out. Any help would be great!


