VMware Cloud Community
jmalek1040
Contributor

Deploy AWS CloudFormation template from vRA

I have a requirement to deploy an AWS CloudFormation template from the vRA self-service portal. The request form needs to ask the end user for custom parameters such as prod vs. non-prod, region, load balancer (yes/no), and so on.

I explored the vRO plug-in for AWS but didn't find any workflow or other out-of-the-box way to deploy a CloudFormation template and build the stack in the AWS cloud.

Please suggest the best way to develop this automation in vRA/vRO.

2 Replies
rmav01
Enthusiast

The plugin only has the EC2 API associated with it, and unfortunately even that functionality is outdated. I saw on another thread that the plugin is getting an update, but I wouldn't hold out hope that the CloudFormation API is part of that refresh. The thing with AWS is that they have tons of offerings, and each of those offerings seems to have its own API. EC2 is the most common, so I can see from a design perspective why VMware would focus on that one; it serves the most customers.

For this kind of task you're going to have to set up an XaaS blueprint, since vRA can only deploy EC2 instances out of the box. The AWS REST API is difficult to work with directly because of all the request signing you have to go through to authenticate to it, so unless you're adventurous I would steer clear of having vRO talk to it directly. As a workaround, I've deployed helper machines with either the AWS CLI or AWS Tools for PowerShell installed on them, and used those for the things the plugin doesn't cover. It's a lot easier to write and manage scripts on those machines and have vRO call them with the appropriate arguments. As an example, here are the CloudFormation cmdlets available in AWS Tools for PowerShell:

AWS CloudFormation | AWS Tools for PowerShell
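For reference, a vRO scriptable task that hands the form inputs to a New-CFNStack call on such a helper host might look roughly like the sketch below. This is only a minimal sketch, assuming the vRO PowerShell plug-in is configured against the helper machine; the invokeScript action signature and the input names (psHost, stackName, templateUrl, environment, createLoadBalancer) are assumptions for illustration, so check them against your plug-in version.

// Inputs (assumed): psHost (PowerShell:PowerShellHost), stackName (string),
// templateUrl (string), environment (string), createLoadBalancer (boolean)

// Build the PowerShell command that runs on the helper machine.
// New-CFNStack comes from the AWS Tools for PowerShell CloudFormation module.
var psScript = "New-CFNStack -StackName '" + stackName + "'" +
    " -TemplateURL '" + templateUrl + "'" +
    " -Parameter @(" +
        "@{ParameterKey='Environment'; ParameterValue='" + environment + "'}," +
        "@{ParameterKey='CreateLoadBalancer'; ParameterValue='" + createLoadBalancer + "'})";

// invokeScript is the stock action in the PowerShell plug-in library module;
// the exact signature (host, script, sessionId) may vary between plug-in versions.
var output = System.getModule("com.vmware.library.powershell").invokeScript(psHost, psScript, null);
System.log("CloudFormation stack creation requested: " + output);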

Good luck!

jasnyder
Hot Shot

Another option is to still use vRO, but expose the AWS SDK for Java directly to scripting. You can do this by modifying the Rhino Class Shutter File property.

  1. Log in to the control center
  2. Click System Properties
  3. Add a new property with the name com.vmware.scripting.rhino-class-shutter-file and the value = /var/lib/vco/app-server/conf/rhinofile
  4. SSH into the vRO appliance and create the /var/lib/vco/app-server/conf/rhinofile file with one line; the contents of that line should be

    com.amazonaws.*

  5. Save the file
  6. Run chown vco:vco /var/lib/vco/app-server/conf/rhinofile
  7. Copy all of the AWS SDK for Java JARs into the /var/lib/vco/app-server/lib folder. There are also some required dependencies that must be copied to the same location (I have not fully verified that all of these are necessary, but this list will make it work):
    1. commons-codec-1.6.jar
    2. commons-logging-1.1.3.jar
    3. httpclient-4.3.5.jar
    4. httpcore-4.3.2.jar
    5. jackson-annotations-2.6.0.jar
    6. jackson-core-2.6.6.jar
    7. jackson-databind-2.6.6.jar
    8. jackson-dataformat-cbor-2.6.6.jar
    9. javassist-3.16.1-GA.jar
    10. joda-time-2.8.1.jar
    11. ognl-3.0.6.jar
    12. slf4j-api-1.6.6.jar
    13. spring-beans-4.0.2.RELEASE.jar
    14. spring-core-4.0.2.RELEASE.jar
    15. spring-oxm-4.0.2.RELEASE.jar
    16. thymeleaf-2.1.1.RELEASE.jar
  8. Run chown vco:vco /var/lib/vco/app-server/lib/*
  9. Run service vco-server restart

The Rhino Class Shutter File property instructs vRO to expose to scripting every Java class (or wildcard package entry) listed in the file designated by the property, with each entry on its own line. The libraries containing those classes also need to be available to the server, hence copying all the JARs to the /var/lib/vco/app-server/lib directory. When you restart the server, it processes the shutter file and those classes become available from script.

Here is an example EMR script that works. It was adapted from the AWS SDK examples. I am sure you can find a CloudFormation example to do the same thing.

This is a workflow with 3 inputs - awsAccessKey (string), awsSecretKey (SecureString), clusterID (string).  All 3 inputs are passed to a scriptable task with the script contents below:

var credentials = new com.amazonaws.auth.BasicAWSCredentials(awsAccessKey, awsSecretKey);
var client = new com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClient(credentials);

// Predefined steps. See StepFactory for the list of predefined steps.
var hive = new com.amazonaws.services.elasticmapreduce.model.StepConfig("Hive",
    new com.amazonaws.services.elasticmapreduce.util.StepFactory().newInstallHiveStep());

// Add the Hive installation step to the existing EMR cluster (job flow)
var result = client.addJobFlowSteps(new com.amazonaws.services.elasticmapreduce.model.AddJobFlowStepsRequest()
    .withJobFlowId(clusterID)
    .withSteps(hive));

System.log(result.getStepIds());
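
Since the original question was about CloudFormation, a rough equivalent using the SDK's CloudFormation client could look like the sketch below. The class and method names come from the AWS SDK for Java v1 (which the com.amazonaws.* shutter entry covers), but the workflow inputs stackName, templateUrl and environment are assumptions added for illustration, so treat this as a starting point rather than tested code.

var credentials = new com.amazonaws.auth.BasicAWSCredentials(awsAccessKey, awsSecretKey);
var cfnClient = new com.amazonaws.services.cloudformation.AmazonCloudFormationClient(credentials);

// Map a self-service form answer onto a CloudFormation template parameter
var envParam = new com.amazonaws.services.cloudformation.model.Parameter()
    .withParameterKey("Environment")
    .withParameterValue(environment);

// Point the request at a template stored in S3 (withTemplateBody would inline it instead)
var request = new com.amazonaws.services.cloudformation.model.CreateStackRequest()
    .withStackName(stackName)
    .withTemplateURL(templateUrl)
    .withParameters(envParam);

var result = cfnClient.createStack(request);
System.log("Created stack: " + result.getStackId());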

This method works and is powerful, but it has at least one drawback: you have to use the full namespace of every object every time you instantiate one, so you might have to do a lot of cross-checking against the AWS SDK to make sure you have the right names. The other problem is the management of the AWS access and secret keys. There may be a way to use the built-in AWS plugin to store them, or you might have to cache them in a configuration element and reference it from your workflow.
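
If you go the configuration element route, reading the keys back in the scriptable task is straightforward. A minimal sketch, assuming a category path ("AWS/Credentials"), element name ("Default") and attribute keys ("accessKey" as string, "secretKey" as SecureString) that you would create yourself:

// Look up the configuration element by category path and name (both assumed here)
var category = Server.getConfigurationElementCategoryWithPath("AWS/Credentials");
var element = null;
for each (var ce in category.configurationElements) {
    if (ce.name == "Default") {
        element = ce;
    }
}

// Pull the stored credentials out of the element's attributes
var awsAccessKey = element.getAttributeWithKey("accessKey").value;
var awsSecretKey = element.getAttributeWithKey("secretKey").value;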

The next logical problem is surfacing the provisioned items back into vRA as manageable resources. This can only be done with Dynamic Types or with a custom plugin.

If you do any of this, it is of course strongly recommended to do it on a test instance and document all your steps before moving on.