elihuj
Enthusiast

Assign vRO Workflow to Blueprint

I have a question about assigning vRO workflows to blueprints. I followed this link to associate a workflow with my blueprint, specifically step 2, which covers assigning a state change workflow to a blueprint. After assigning a workflow to the blueprint via the BuildingMachine stub, I refreshed my blueprint in vRA and did not see any custom properties added. What I ended up doing was manually adding the following as custom properties in my blueprint:

ExternalWFStubs.MachineProvisioned

ExternalWFStubs.MachineProvisioned.diskSize

ExternalWFStubs.MachineProvisioned has a value of the workflow ID I am calling from vRO. When I run a request from my blueprint now, I do get an option to set the disk size, and the workflow executes correctly. This does not seem like the best way to do it, however. Is there something I am missing in the initial configuration?
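For illustration, here is a minimal sketch of a scriptable task that could consume that value inside the called workflow. It assumes the ".diskSize" suffix surfaces as a workflow input named diskSize of type string; that mapping is an assumption about this setup, not something confirmed above.

```javascript
// Minimal sketch (assumption: the ".diskSize" suffix on the stub property arrives
// as a workflow input named "diskSize" of type string).
if (diskSize == null || diskSize === "") {
    throw "No diskSize value was passed from the blueprint request";
}

// Custom property values arrive as strings, so convert before using the number.
var diskSizeGb = parseInt(diskSize, 10);
if (isNaN(diskSizeGb)) {
    throw "diskSize is not numeric: " + diskSize;
}

System.log("Requested disk size: " + diskSizeGb + " GB");
```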

4 Replies
elihuj
Enthusiast

Just an update: I did notice that when I view the Properties of the blueprint during deployment, the correct vRO properties that I pass from Orchestrator are present. What I can't figure out is why they are not showing up in the blueprint itself.

CSvec
Enthusiast
(Accepted solution)

Since you have this tagged as vRA 7, you may want to look at the Event Broker. A lot of the old stub methods still exist and are needed for a few things, but overall the stub system was (in my opinion) confusing and opaque.

https://virtualviking.net/2016/01/07/exploring-the-vrealize-automate-7-0-event-broker-service/

That should cover the basics of how to make it all work.

As for your exact setup and experience:

There are a bunch of ways to pin a workflow onto a blueprint in vRA 6's stub style. You wandered into one of them: you can use custom properties on the blueprint to call a workflow ID directly at certain points in the lifecycle. You can also embed things into the sort of global execution thread that works the other way around: it watches for certain blueprints/phases being executed and joins in on the fun. This is how the custom hostnaming workflows used to work.

edit: to be clear, if you inject (which is what the approach in that link does), you should only see the data show up after a request (meaning the blueprint shows nothing, but the provisioning/provisioned item will have the additional data injected). If you add the stubs directly as properties, they will show on the blueprint and on the resulting object.
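For reference, the "add the stubs directly as a property" route amounts to the two custom properties from the original post. Expressed as a vRO Properties object purely for illustration (the workflow ID below is a placeholder, not a real ID):

```javascript
// Illustration only: these name/value pairs are what get typed into the blueprint's
// custom properties; the workflow ID is a placeholder.
var stubProps = new Properties();
stubProps.put("ExternalWFStubs.MachineProvisioned", "<vRO workflow ID>");
stubProps.put("ExternalWFStubs.MachineProvisioned.diskSize", "");  // value supplied at request time
```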

elihuj
Enthusiast

Hello CSvec, thank you. Your reply was very informative. It looks like workflows need to be tailored to be used by the Event Broker service, is that correct?

CSvec
Enthusiast

Correct. Similar to the old stubs, there is a predictable/expected input: you create an input with a specific name, and the Event Broker populates it with predictable data. This is broken down for you when you create the event subscription; for example, this is Machine Provisioning:

[Screenshot: the Machine Provisioning event subscription schema, showing the available payload fields]

So we have a workflow bound with an input named "machine" of type "string", which winds up being a JSON object that we can parse for ['properties'] to get the custom properties of the incoming machine provisioning request, and then override the ['name'] value as needed. We do this with a Lifecycle State Phase of "PRE" and a Lifecycle State Name of "VMPSMasterWorkflow32.Requested", so that we intercept the build request before anything is provisioned (and at that point do a bunch of things to prep the machine provisioning).

You can also create an input named "payload" with a type of "Properties" and it will just give you everything regardless; it is then up to you to parse it properly.
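To make that concrete, here is a minimal sketch of the scriptable-task side, assuming the single "machine" string input described above; the hostname value is just a placeholder.

```javascript
// Minimal sketch, assuming a workflow input named "machine" of type string that carries
// the machine section of the event payload as JSON (as described above).
var machineObj = JSON.parse(machine);

System.log("Incoming request for machine: " + machineObj.name);

// The custom properties of the provisioning request.
var customProps = machineObj.properties;
for (var key in customProps) {
    System.log(key + " = " + customProps[key]);
}

// Pick the new hostname here. Changing this local object by itself does not change the
// request; see the note below about updating the vCAC:Entity.
machineObj.name = "custom-host-01";  // placeholder

// With an input named "payload" of type Properties instead, the same data should be
// reachable via payload.get("machine") and its nested "properties" entry.
```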

It's a lot easier to use than the old system and a lot more transparent, but does require a bit of effort to move to it.

edit: more clarity for anyone who reads this: while you get all that data as JSON (or whatever form), it's still on you to update the appropriate entities. Writing back to that object does nothing, of course. So in our example above, once we've picked a new hostname, we'd need to go find our vCAC:Entity and call updateVCACEntity with the new values, since we actually want to override things in the request.
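A rough sketch of that last step follows, with caveats: it assumes you already have the machine's vCAC:Entity in a variable (here vmEntity) and a chosen name (newName), and that the updateVCACEntity action in com.vmware.library.vcac takes (hostId, modelName, entitySetName, entityIdString, parameters, links, headers). Verify the action signature and the entity lookup against your own vRO inventory before relying on this.

```javascript
// Rough sketch only. Assumptions: "vmEntity" is the machine's vCAC:Entity (already looked up)
// and "newName" is the hostname chosen earlier; check the updateVCACEntity action signature
// in your vRO inventory before using this.
var updateProps = new Properties();
updateProps.put("VirtualMachineName", newName);  // the IaaS entity field that holds the machine name

System.getModule("com.vmware.library.vcac").updateVCACEntity(
    vmEntity.hostId,
    vmEntity.modelName,        // typically "ManagementModelEntities.svc"
    vmEntity.entitySetName,    // typically "VirtualMachines"
    vmEntity.keyString,
    updateProps,
    null,
    null);
```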
