

vRealize Automation relies on a blueprint concept for offering catalog services. This means that services are pre-defined in the catalog and users simply request them. Although there are options for parametrization and deployment type (template, unattended installation …), some users still might want to perform a custom installation of a VM. To do this they need a way to attach an ISO to a VM provisioned by vRA, as well as a method to upload an ISO of their choice. Neither is available out-of-the-box in vRA, but both can be implemented with small customizations.


This article describes how to

  • Leverage a central ISO storage
  • Provide upload web page for ISOs
  • Integrate mount and unmount day-2 operations
  • Create example blueprint with day-2 operations entitled






This article just gives an example and starting point on how this requirement can be achieved. It’s not intended to provide a “water-proof” solution, nor does it leverage all capabilities PHP technology provides.



Preparation of central ISO storage


vSphere typically uses a central shared datastore as ISO repository. This can be of any supported type such as iSCSI, Fibre Channel or NFS. For this use case the best way to go is an NFS share: while mounting and ISO browsing would work with any datastore type, it’s much easier to upload files to a file service than to block storage.

Therefore, as a first step, you should provide an NFS share with write permissions for the web server we will discuss later and appropriate permissions for the ESX hosts.
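As an illustration, the export configuration on the NFS server could look like the following. The export path and the address placeholders are hypothetical; adapt the options to your security requirements.

```
# /etc/exports on the NFS server
/export/iso   <web-server-ip>(rw,sync,no_root_squash)
/export/iso   <esx-host-ip>(rw,sync,no_root_squash)
```

After changing the file, reload the exports with "exportfs -ra".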

This NFS share must be added as a datastore to all ESX hosts on which VMs with mounted ISOs will be used.


At this point I won’t go into details on how this is done. Please refer to vSphere documentation.






Create upload server for ISOs


Now we need to set up an upload server which hosts the web page where the image upload can be invoked by the user. vRealize Automation itself does not provide this capability. It’s also hard to use vRealize Orchestrator workflows directly for this task, as Orchestrator expects the source files to be hosted on the Orchestrator VM itself rather than in a directory on the client PC.


The easiest way to create an upload server is to leverage a Linux web server for this task. In this example I am using a CentOS 7 based VM, but the procedure should in general work for any Linux running Apache and PHP.


Setting up web server on linux


The basic procedure for the upload server setup is described here:


In this example slight modifications are used. For instance, the configuration below checks whether a file with the same name already exists and only accepts ISO files.


The following steps must be performed:


Install apache on linux


yum install httpd


Make sure Apache starts on VM boot


chkconfig httpd on


Install PHP (might be more packages than actually needed)


yum install php php-mysql php-devel php-gd php-pecl-memcache php-pspell php-snmp php-xmlrpc php-xml


Modify /etc/php.ini


The values are self-explanatory and can be modified to individual needs.


memory_limit = 64M

upload_max_filesize = 8000M

post_max_size = 8000M

file_uploads = On


Create file /var/www/html/index.html


<!DOCTYPE html>
<html>
<head>
<title>VMware vRealize Automation ISO Upload</title>
<style>
body {
    background-color: #3989C7;
    background-repeat: no-repeat;
    background-position: center top;
    color: white;
    font-size: 150%;
}
</style>
</head>
<body>
<h1>VMware vRealize Automation ISO Upload</h1>

<form action="upload.php" method="post" enctype="multipart/form-data">
    Select ISO image to upload:
    <input type="file" name="fileToUpload" id="fileToUpload">
    <input type="submit" value="Upload Image" name="submit">
</form>
</body>
</html>
Create file /var/www/html/upload.php



<?php
$target_dir = "uploads/";
$target_file = $target_dir . basename($_FILES["fileToUpload"]["name"]);
$uploadOk = 1;
$imageFileType = strtolower(pathinfo($target_file, PATHINFO_EXTENSION));

// Check if file already exists
if (file_exists($target_file)) {
    echo "Sorry, file already exists.";
    $uploadOk = 0;
}

// Allow certain file formats
if ($imageFileType != "iso") {
    echo "Sorry, only ISO files are allowed.";
    $uploadOk = 0;
}

// Check if $uploadOk is set to 0 by an error
if ($uploadOk == 0) {
    echo "Sorry, your file was not uploaded.";
// if everything is ok, try to upload the file
} else {
    if (move_uploaded_file($_FILES["fileToUpload"]["tmp_name"], $target_file)) {
        echo "The file " . basename($_FILES["fileToUpload"]["name"]) . " has been uploaded.";
    } else {
        echo "Sorry, there was an error uploading your file.";
    }
}
?>



Create uploads folder and set proper permissions


mkdir /var/www/html/uploads

chown apache:apache /var/www/html/uploads



Mount ISO NFS share to uploads folder


mount -t nfs <server IP or hostname>:/<path-to-iso-folder> /var/www/html/uploads
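To make this mount persistent across reboots of the web server, a matching /etc/fstab entry can be added (same placeholders as in the mount command above):

```
<server IP or hostname>:/<path-to-iso-folder>  /var/www/html/uploads  nfs  defaults  0 0
```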



Restart apache services after changes


service httpd restart



After all required steps have been performed, the upload page shown in the screenshot below should appear when pointing your browser to the web server.





When a local file has been selected for upload, it is first transferred to the Linux web server and then stored in the uploads folder on the NFS server. To optimize performance, it might be necessary to tune the memory_limit parameter in /etc/php.ini.




Create Day-2 operations


The actual mount and unmount tasks are performed by the related Orchestrator workflows, which are invoked as day-2 operations.

As a prerequisite, the vCenter plug-in must be configured properly in Orchestrator, allowing the workflows to scan the datastores.


Following steps must be done to get the workflows configured.


Import Orchestrator package


Import Orchestrator package com.vmware.custom.isomount.package (attached to this blog)


Modify workflow “Mount ISO”


Edit Workflow and select the NFS datastore added previously through vCenter plugin.




Set the datastorepath attribute. If the root folder of the NFS share is used, insert just the datastore name into the field.
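To illustrate what the attribute ends up describing: vSphere references an ISO by a path of the form "[&lt;datastore&gt;] &lt;folder&gt;/&lt;file&gt;". The helper below is only a sketch of that naming scheme; the function name and the datastore "ISO-NFS" are made up for the example.

```python
# Sketch of how a vSphere ISO path is assembled; "build_iso_path" and the
# example datastore name are hypothetical, not part of the vRO workflow.
def build_iso_path(datastore, filename, folder=""):
    inner = f"{folder}/{filename}" if folder else filename
    return f"[{datastore}] {inner}"

# Root folder of the NFS share: only the datastore name is needed.
print(build_iso_path("ISO-NFS", "centos7.iso"))        # [ISO-NFS] centos7.iso
# ISO stored in a subfolder of the share:
print(build_iso_path("ISO-NFS", "centos7.iso", "uploads"))
```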




Save workflow


Add custom resources in vRA


Go to Design --> XaaS --> Resource Actions and create a new one.


Select “Mount ISO” workflow from proper folder.




Keep defaults in next page





In next page remove description and keep other values.





In Form tab drag a new field of type “Link” from left to right and place it above the “select ISO file to mount” field.




Modify field constraints to define Value --> Constant --> <URL of the web server>

Click “Apply” and “Finish”


Publish the new resource action


Perform the same steps for the Unmount ISO resource action; however, no modification of the form page is required.



Create example blueprint


The above-mentioned day-2 operations for mount and unmount can be used with any entitled blueprint. Depending on the VM configuration, however, it might be necessary to modify the workflows, as the CD-ROM device that receives the ISO reference may be identified differently (different identifier numbers or types).

In this example we use an empty VM blueprint which expects VM installation to happen based on the ISO mounted.


Create empty VM blueprint


Create new blueprint in vRA through design tab.




Specify “create” action and “BasicVmWorkflow” in build information tab.




Specify the desired VM parameters, in particular the disk size. The storage maximum must be at least the desired disk size.




Add disk with proper size on Storage tab





Specify custom property with operating system type




In vSphere it’s required to specify the OS type of a VM during VM creation. To do this you have to set the mentioned custom property in vRA. Find a list of guest OS identifiers here:


In this example we are using a Windows 2012 server guest. For production use of this configuration it might be required to offer an OS selection to the user on the request page, or to choose a generic OS type and hard-code it into the blueprint.
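For the Windows 2012 guest used here, the custom property could look like this. The identifier windows8Server64Guest is an assumption taken from the vSphere guest OS identifier list; verify it against your vSphere version.

```
VMware.VirtualCenter.OperatingSystem = windows8Server64Guest
```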


As last step you need to publish the blueprint and create a proper entitlement.




It’s recommended to use the VMRC console for the full VM installation. VMRC is much easier to handle for this use case than the web remote console; this especially translates to better keyboard mapping for non-US keyboards.



Final test


After proper entitlement a new catalog item should appear in the catalog of the entitled user. On request the virtual machine is provisioned. When the provisioning process has finished, the defined day-2 operations are available on the VM object.




The “Mount ISO” operation provides a link to the upload page and shows all available ISOs in a dropdown field. After selecting an ISO and clicking the submit button, the ISO is mounted automatically to the VM.




Users must wait until the mount process has finished before they can carry out other day-2 operations. After that they can use the VMRC day-2 operation to install and manage the VM based on the ISO. If VMRC is not installed on the client, users can use the presented link to download and install it. The VM must be power-cycled to start the ISO boot process.




Many customers are leveraging configuration management tools like Ansible to do automated configuration management as well as application deployment in their infrastructure. vRealize Automation is perfectly suited to manage the underlying infrastructure layer and leverage Ansible on top for the application deployment and control.

There are many good articles and blogs which explain in depth how Ansible can be integrated with vRA, like this one here

When I read these blogs, however, I had a more dynamic kind of integration in mind. In a nutshell, the idea is to place Ansible playbooks in a GitHub repository, read them out through vRA and present them in a dropdown. The user then selects the proper playbook, and the deployment automatically takes care of the actual installation through vRA and Ansible.

How does this sound?





I need to mention at this point that I am neither an Ansible expert nor somebody with in-depth experience in writing best-practice Orchestrator workflows. This document is all about creating a demo case which works and shows the potential; it’s not intended to provide full guidance for a production deployment.





Ansible installation


To do a proper Ansible deployment, an Ansible server is required. In my case I used a simple CentOS VM on which I installed Ansible. There’s a lot of documentation on the web on how to set up an Ansible server, so I’ll only list the relevant bullet points here:

  • Install Ansible with “yum install ansible”
  • Modify /etc/ansible/ansible.cfg
    • uncomment “host_key_checking = False”
    • This makes sure that a new host does not need to be added to the known_hosts file for SSH communication to work


The Orchestrator workflow used for this demo downloads playbooks to /root/ansible-playbook-downloads. It then creates a temporary inventory file following the “ansible-<IP>” notation and executes “ansible-playbook -i /root/ansible-<IP> /root/ansible-playbook-downloads/<playbookfilename>”.
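This sequence can be sketched in Python as follows. build_ansible_run is a hypothetical helper, and /tmp stands in for the /root paths used by the actual workflow:

```python
# Sketch of what the workflow effectively does on the Ansible server:
# write a one-host inventory file, then assemble the ansible-playbook call.
import os

def build_ansible_run(ip, playbook, base="/tmp"):
    inventory = os.path.join(base, f"ansible-{ip}")
    with open(inventory, "w") as f:
        f.write(ip + "\n")                 # temporary inventory: just the target VM
    playbook_path = os.path.join(base, "ansible-playbook-downloads", playbook)
    return ["ansible-playbook", "-i", inventory, playbook_path]

print(" ".join(build_ansible_run("", "apache.yml")))
```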


I am aware that Ansible leverages the /etc/ansible/hosts inventory in most production cases. To ease deployment for this demo case I decided to use the “-i" option, which allows specifying an inventory file containing the target server IPs. With some effort the scripts can obviously be changed to modify the hosts file properly instead.


Ansible playbooks


In my demo the Ansible playbooks are stored in my GitHub repository. Rather simple playbooks are used for the initial use cases:




- hosts: all
  remote_user: root
  tasks:
    - name: httpd is installed
      yum: name=httpd state=installed

    - name: httpd is running and enabled
      service: name=httpd state=started enabled=yes




- hosts: all
  remote_user: root
  tasks:
    - name: mariadb is installed
      yum: name=mariadb-server state=installed

    - name: mariadb is running and enabled
      service: name=mariadb state=started enabled=yes


SSH Key handling


Ansible communicates with its target hosts through the SSH protocol. For this it’s helpful to have key-based authentication in place. For this demo I created an SSH key on the Ansible server:


ssh-keygen -t rsa


After that, the public-key file /root/.ssh/authorized_keys must be copied to the vRA/vRO server. If you don't want to change the defaults in the vRO workflow you should use the path mentioned here; otherwise feel free to adapt it.


scp /root/.ssh/authorized_keys <vRO-host>:/etc/vco/app-server/data/authorized_keys


Login to the vRO server and set proper ownership:


chown vco:vco /etc/vco/app-server/data/authorized_keys



The Orchestrator package you are about to import below includes a workflow that copies the above-mentioned file from the Orchestrator server to the deployed target VM prior to executing the Ansible workflows.



Preparation in vRealize Orchestrator

Import Orchestrator Workflows


The Orchestrator workflows developed for this use case leverage the Guest Script Manager package for Orchestrator. Therefore, you have to download it in advance from this link and import it into Orchestrator:


Afterwards, import the Orchestrator workflow package attached to this article. The package includes a number of dependent workflows and actions which have to be imported or already present, but the most important ones for this use case are:

  • Workflow “Copy SSH Key”
    Makes sure that the public SSH key is copied to the target VM, enabling Ansible to start communication.
  • Workflow “Run Ansible Script on VM”
    Downloads the Ansible playbook from GIT and runs the proper Ansible commands on the server to execute a playbook on the target VM. It leverages a script configuration resource also provided with this Orchestrator package.
  • Action “getGITFilenamesbyREST”
    Reads the filenames which include “yml” in their name from the GIT repository.
  • Action “getGITDownloadURLbyREST”
    Retrieves the GIT download URL for the filename provided.

Registration of GIT REST Host


The workflows and actions issue REST calls to the GitHub server. As these calls use an HTTPS connection, the related certificates must be imported into vRealize Orchestrator. To do this you have to run the “Add a REST host” workflow once, pointing it to the GitHub server:







Modification of Workflow Parameters


After successful import of the Orchestrator package you will find 2 new workflows in this directory.




The attributes of both workflows must be tailored to the target environment.



Most important values to change for “Copy SSH Key”:

  • password: root password of target VM (to copy SSH key)




Most important values to change for “Run Ansible Script on VM”

  • vm: Select VM where ansible has been installed
  • vmPassword: insert root password for Ansible VM
  • queryString: Modify repository URL if a different repository should be used




Preparation in vRA


Add Subscriptions in vRA


There are actually two subscriptions required in vRA to execute the imported vRO workflows; see the screenshots below. It’s important to specify the subscription priority to make sure that the workflows are executed in the correct order. Also make sure that the subscriptions are published after creation.











Add Custom properties in vRA


There are custom properties required to control the deployment process and to add the playbook selection to the blueprint request form.

Adding playbook selection in request form


Create a new property definition as per screenshot below. Make sure you reference the Action “getGITFilenamesbyREST” as external script action and add the properties as needed:

  • baseURL:
  • httpMethod: GET
  • queryString: /repos/cferber/ansible-playbooks/contents (You can change this if different repository should be used. However you have to make sure it's changed accordingly in the attributes of the Orchestrator workflow above)
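As a rough Python sketch of what the “getGITFilenamesbyREST” action does with the response: GitHub’s contents API returns a JSON array of file entries, and only the names containing “yml” are kept. playbook_names and the sample data are made up for illustration.

```python
import json

# Keep only the playbook file names from a GitHub contents API response.
def playbook_names(contents_json):
    return [entry["name"] for entry in json.loads(contents_json)
            if "yml" in entry["name"]]

# Minimal mock of the API response for the example repository:
sample = json.dumps([
    {"name": "apache.yml"},
    {"name": "mariadb.yml"},
    {"name": ""},
])
print(playbook_names(sample))   # ['apache.yml', 'mariadb.yml']
```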




Adding property group for running SSH Key workflow




Adding property group for providing payload by EBS




Adding property group for running Ansible workflow





Prepare Linux Template in vRA


In this demo I used a CentOS 7 template. A default installation of CentOS 7 should work straight away. I won’t explain in this document how to create a CentOS 7 blueprint and publish it properly, but this obviously will be required.

In addition, it’s important to enable the property groups created above on the blueprint, which makes sure that the workflows are executed at the right time in the provisioning process.






Final test and caveats


Now that you have configured everything properly, you should be able to request a CentOS VM. The field “Select Ansible Playbook” should appear in the request form, and the yml Ansible playbook files available in the GitHub repository will be listed for selection.

Once a playbook has been selected and the request has been issued, deployment of the VM should start and finish with the application installed as per the playbook selection.

If anything goes wrong, you have to leverage your vRA and vRO troubleshooting skills :-) I won’t go into detail on how to do this here.

One caveat I found is a possible race condition when multiple deployments run at the same time: a workflow might fail at the point where Guest Script Manager tries to modify files simultaneously. A manual re-run works in that case, however.

This is something that certainly could be solved. For the demo I would recommend only starting one deployment at a time.


Have fun!

vRealize Automation (vRA) provides several methods for provisioning virtual machines from blueprints. These include template-based mechanisms as well as workflows leveraging unattended installation procedures to deploy a virtual machine operating system. As some customers have existing deployment processes they want to keep, a variety of unattended installation methods, ranging from SCCM to AutoYaST and RedHat Kickstart to name some, is supported by vRA.

This blog describes how to leverage kickstart mechanism to deploy a virtual machine in vRealize Automation using CentOS 6.4 x86 as operating system. The process has been tested with vRealize Automation 6.2.3 and version 7.


Be aware that this blog is not intended to provide a full step-by-step guide or to replace the documentation. It rather covers the whole process in bullet points and highlights important pieces that are not clearly covered in the documentation.

Architectural considerations


A deployment process in vRealize Automation (vRA) is based on a blueprint which is cloned when requested by a user. The blueprint itself defines the methodology used to deploy the virtual machine. In the RedHat/CentOS Kickstart case, a workflow called “LinuxKickstartWorkflow” is leveraged. Kickstart, in a nutshell, is the default unattended installation method for RedHat Linux operating systems and is based on a kickstart description file.

Following general steps have to be taken to accomplish the task:


  • Preparation of a CentOS installation ISO which is modified to point to an externally hosted kickstart file. CentOS ISO is stored on a vSphere datastore.
  • External kickstart configuration file has to be created to define unattended installation parameters as well as invoke installation of vRA guest agent as last part of the installation. The kickstart file is stored on an external server (e.g. web server)
  • vRA guest agent files have to be stored on a network share (e.g. web server)
  • Blueprint custom properties have to be defined to e.g. point to the installation sources


Deployment process in high level:

  • User requests published blueprint
  • Virtual machine is created on vSphere with CentOS iso attached
  • VM is booted up and boots from attached ISO
  • ISO downloads kickstart file from location defined and runs full unattended installation
  • Last part of unattended installation will download guest agent files to virtual machine and install the agent into the virtual machine
  • After reboot guest agent is started automatically and reports “success” to vRA
  • Process is finished in vRA and VM can be managed in “items” view



Preparation of CentOS ISO


An existing CentOS ISO (e.g. downloaded from the CentOS project) has to be prepared to include the information where to find the kickstart file. There are multiple ways to modify the content of an ISO file; some of them require commercial tools, as most freeware tools have a limit of 300 or 500 MB when writing an ISO file. This document therefore uses a standard Linux operating system, which provides the functionality for free and is guaranteed to work. Follow these steps to modify the ISO:

  • Provide a web server which is reachable in your network
  • Copy the ISO file to the linux system
  • Loop mount the iso according to this description:
  • In the “make your changes” section of the above document, edit the /var/tmp/linux/isolinux/isolinux.cfg file and add the append parameters as shown below (originally highlighted in red). Replace <websrv> with the IP address of your web server and adapt the path and file name to your needs. Creation of the kickstart file is described in the next section.



label linux

  menu label ^Install or upgrade an existing system

  menu default

  kernel vmlinuz

  append initrd=initrd.img --bootproto=dhcp ks=http://<websrv>/vra/ks.cfg


  • Save the file and create the ISO as per the above link’s description
  • Copy the ISO to a vSphere datastore, e.g. by copying it to a Windows system and uploading it to the datastore from there




  • Note datastore name (in this case “VM-NFS-01”) and path to ISO file (in this case “/ISO/centos64-unattend.iso”)



Preparation of kickstart file and guest agent


The preparation of the kickstart file is described in the vRA documentation, see here:


The file can simply be generated with a text editor. In this case we are using a slightly modified version compared to the documentation, which works as well, see here:



auth --useshadow --enablemd5

bootloader --append="rhgb quiet" --location=mbr --driveorder=sda


clearpart --all --initlabel


firewall --disabled

keyboard us

lang en_US

logging --level=info

network --bootproto=dhcp --device=eth0 --onboot=on


rootpw secret

selinux --enforcing

timezone --isUtc America/New_York


part / --asprimary --fstype="ext3" --size=4096

part swap --asprimary --fstype="swap" --size=512





%post

rpm -i http://<websrvip>/vra/gugent-6.2.2-05062015.i386.rpm

export AXIS2C_HOME=axis2

export PYTHONPATH=/usr/share/gugent/site/dops

echo | openssl s_client -connect <vra-iaas-srv-fqdn>:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /usr/share/gugent/cert.pem

cd /usr/share/gugent

./ <vra-iaas-srv-fqdn>:443 ssl


Replace all components originally marked in red with the paths, IPs and server names tailored to your environment.

After successful modification, name the file “ks.cfg” and store it in the appropriate web server path (as referenced from isolinux.cfg in the modified CentOS ISO file).

Store the guest agent rpm file on the web server as well in the appropriate path.



Preparation of the blueprint


Create a new blueprint and use the action “Create” as well as the provisioning workflow “LinuxKickstartWorkflow”. Fill in all other values according to your needs. In addition, there are some custom properties that have to be set to make the kickstart installation work. Find an easy example here:


Image.ISO.Location = Datastore name where CentOS ISO resides

Image.ISO.Name = path and name of the CentOS ISO file relative to the root of the above-mentioned datastore

VMware.VirtualCenter.OperatingSystem = operating system ID of the OS to be installed.
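Using the example datastore and path noted earlier, the properties would look like this. The guest OS identifier centos6Guest is an assumption for a 32-bit CentOS 6 guest; verify it against the vSphere guest OS identifier list for your version.

```
Image.ISO.Location = VM-NFS-01
Image.ISO.Name = /ISO/centos64-unattend.iso
VMware.VirtualCenter.OperatingSystem = centos6Guest
```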


See full reference here:


After saving the blueprint, publishing and defining the appropriate entitlements you should be able to request a VM which gets fully provisioned by kickstart process.





ISO checksum wrong


If the ISO generation tool does not create a proper ISO checksum, you will notice this during boot of the created virtual machine: in an early boot stage a related message comes up on the console.

If that’s the case use the recommended way to modify the ISO as per this documentation (using linux system).


Guest agent troubleshooting


One important step of the whole process is the successful installation of the guest agent. If this step is not done properly, the whole process in vRA will stay in “in progress” status and wait for a timeout (which will not occur in less than 24 hours). So it’s essential that the guest agent is installed and starts up properly after reboot to report “success” back to vRA.

Some points to look at

  • After deployment the guest agent should be installed. You can check this by running “rpm -qa | grep gugent”. This should show a gugent package with the appropriate version number. If the output is empty, gugent has not been installed. In this case check the kickstart script for syntax errors in the “rpm -i" command and also check the network configuration for DHCP, which is leveraged during kickstart installation.
  • The guest agent communicates with the vRA IaaS server in an encrypted way. To do this it has to retrieve the client certificate from the IaaS server and store it in a file. Using the default parameters in the above configuration files, a file called “cert.pem” should be stored under /usr/share/gugent. If it’s not there or its content is empty, check the availability of the IaaS server during VM installation and check the kickstart file for correct syntax. In addition, be aware that during installation the virtual machine uses a DHCP IP address. Make sure that this address, incl. the assigned DNS server, is able to resolve and reach the vRA IaaS server specified in the kickstart file.
  • Check if vrm-agent is running on the installed virtual machine: “ps aux” should show the appropriate process ( In addition, check if vrm-agent is configured for automatic start with the “chkconfig --list” command. If there’s no runlevel entry for vrm-agent but the agent has been installed, the kickstart script section with the “” command has failed. This could be the case because the cert file is not available or the IaaS server is not reachable.
  • For further troubleshooting there’s a log file GuestAgent.log which provides more in-depth information on guest agent issues, as well as another log file /usr/share/gugent/axis2/logs/gugent-axis.log which shows even more details.

Enterprise customers running complex vSphere environments with multiple users usually intend to restrict access to the environment as much as possible. This is true for interactive users, who get specific views of and permissions on the objects they need, but also for service users leveraged by other components to access vCenter. This article covers vRealize Operations and vRealize LogInsight and shows what minimum permissions are required on the vCenter side to make the integration work.

vRealize Operations 6.1


vRealize Operations uses three types of users to access vCenter environments through the management pack. While in small environments all three can use the same administrative account, it's also possible to use different permissions for each functionality.




vCenter Collector user


This is the most important user; it queries information from vCenter and receives metric data. As this user does not need any write access to vCenter, read-only permissions (an existing role can be used), usually on datacenter or vCenter Server level, are sufficient. In case the view from vRealize Operations should be limited to a cluster, hosts or other components, the scope of the user in question has to be defined more granularly.

Find some more information about it here: Add a vCenter Adapter Instance


vCenter Registration user


When adding a new vCenter Adapter configuration to vRealize Operations, registration of the vROps server with the vCenter system has to be done once. This user requires some limited write permissions, like shown in the screenshot below.

Find more information here: VMware vCenter Operations Manager 5.8 (this document has been created for an older version but the permission structure in general applies to 6.1 still)






vCenter Python Actions Adapter


vRealize Operations provides the ability to run actions/tasks on objects it manages. For vSphere this typically relates to VM lifecycle operations like shown in the screenshot below:




To access the above-mentioned actions and execute them on the related vCenter server, a Python interface is used. Obviously this requires proper permissions to allow running tasks on vCenter objects. As a best practice it's recommended to create a role on vCenter which only enables the actions that are desired.

Find more information here: Add a vCenter Python Actions Adapter Instance


E.g. if a customer desires to only allow power on and power off actions a permission structure would look like shown here:





Other examples for custom python permissions:


  • Power Off VM - Virtual Machine\Interaction\Power Off
  • Power On VM - Virtual Machine\Interaction\Power On
  • Set CPU Count for VM - Virtual Machine\Configuration\Change CPU Count (If you want to power off you will need the power off privilege above. If you want to take a snapshot you will need the create snapshot privilege)
  • Set Memory for VM - Virtual Machine\Configuration\Memory
  • Set CPU Resources for VM - Virtual Machine\Configuration\Change Resource
  • Set CPU Count &Memory for VM - Virtual Machine\Configuration\Change CPU Count, Virtual Machine\Configuration\Memory (If you want to power off you will need the power off privilege above. If you want to take a snapshot you will need the create snapshot privilege)
  • Delete Unused Snapshots for VM - Virtual Machine\Snapshot Management\Remove Snapshot (user must also have read access to the host that the vm is running on)
  • Shutdown Guest OS for VM - Virtual Machine\Interaction\Power Off

vRealize LogInsight 3.0


vRealize LogInsight has a simpler permission structure compared to Operations. For the vCenter connection, read-only permissions are basically enough. However, to inject and configure logging (add a new syslog destination) on the related ESXi hosts, four additional permissions are required.

Note: Make sure that access permissions are configured on the top-level folder of vCenter and that "propagate to children" is enabled.
Note: Make sure that access permissions are configured on top level folder of vCenter and that "propagate to children" is enabled.