VMware Cloud Community
christianpg
Enthusiast

Best practices when writing powershell workflow activities

I'm no PowerShell guru, and it took me some time to get my scripts to behave well in workflows.

For that reason, I've been missing a list of good practices for writing such scripts. Documentation is scarce...

Would like to start this discussion so that we can learn from each other.


Some of my own experiences (to get the snowball rolling):

Script calls to *-Host cmdlets will fail, since the scripts run in an unusual runspace inside the .NET 4 Workflow Foundation.

A call to Write-Host for logging or other purposes may work sometimes, but run multiple concurrent activities and it will definitely fail with:

“Cannot Invoke this function because the current host does not implement it”

The same error message will appear if the cmdlet requires a confirmation, so make sure to force it through:
Remove-ADGroup -Identity myTestGroup -Confirm:$false
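Since Write-Host is unreliable in this runspace, a host-independent alternative is to append to a log file instead. A minimal sketch (the path and function name are hypothetical; adjust for your DEM worker):

```powershell
# Host-independent logging: Add-Content does not need a console host,
# unlike Write-Host. $logFile is a hypothetical path.
$logFile = 'C:\vCAC\Logs\workflow.log'

function Write-WorkflowLog {
    param([string] $Message)
    $line = '{0:yyyy-MM-dd HH:mm:ss}  {1}' -f (Get-Date), $Message
    Add-Content -Path $logFile -Value $line
}

Write-WorkflowLog 'Entering BuildingMachine stub'
```

Note that concurrent activities appending to the same file can still collide; a synchronised logging framework (see the log4net suggestion further down) avoids that.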

Be careful with the use of "continue" in try/catch blocks or loops. It may easily exit your script earlier than planned.

Don't forget that the activities run asynchronously and in parallel - race conditions will occur.

Helpful blogs:

http://cloudyautomation.com/category/vcac-external-workflows/

http://dailyhypervisor.com/vcloud-automation-center-vcac-5-1-workflow-designer-walk-through-add-com...

3 Replies
admin
Immortal

Another quick one off the top of my head: trying to ignore certificates in PowerShell causes an exception. The only known workaround I've seen is to manually trust the certificate on the DEM worker.

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}

christianpg
Enthusiast

The regular exception handling for workflows swallows the stacktrace and only logs the error message.

By catching all exceptions within your code, you can throw more informative error messages.
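A sketch of that pattern, reusing the Remove-ADGroup example from the original post (the message format is just a suggestion):

```powershell
try {
    # -ErrorAction Stop turns non-terminating errors into catchable exceptions.
    Remove-ADGroup -Identity myTestGroup -Confirm:$false -ErrorAction Stop
}
catch {
    # Re-throw with enough context to diagnose the failure from the
    # workflow log, which otherwise only shows the bare message.
    throw ('Remove-ADGroup failed for myTestGroup: [{0}] {1}' -f $_.FullyQualifiedErrorId, $_.Exception.Message)
}
```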

d-fens
Enthusiast

Hi, I actually recommend writing standalone PowerShell scripts first that take a MgmtContext and a MachineId as input. That way I can write and debug the scripts outside the vCAC workflows. Within the vCAC workflow I set a "debug breakpoint" and let it wait until I have finished testing (Simplify your life while testing and debugging PowerShell scripts in vCAC – d-fens GmbH). By using an ID as input for a ParameterSet instead of a machine object, you can call these scripts from vCO as well without worrying about converting vCO objects to .NET objects (resolution is done within the script), e.g.:

if ($PSCmdlet.ParameterSetName -eq 'id') {
  $Machine = $MgmtContext.VirtualMachines | ? VirtualMachineId -eq $MachineId;
} # if
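For context, the parameter declaration such a script assumes might look like this - a sketch only, with parameter sets named as in the fragment above:

```powershell
[CmdletBinding()]
param (
    # Management context passed in from the workflow.
    [Parameter(Mandatory = $true)]
    $MgmtContext,

    # Resolve the machine from its id so vCO can call the script too.
    [Parameter(Mandatory = $true, ParameterSetName = 'id')]
    [string] $MachineId,

    # Alternatively accept the machine object directly.
    [Parameter(Mandatory = $true, ParameterSetName = 'object')]
    $Machine
)

if ($PSCmdlet.ParameterSetName -eq 'id') {
  $Machine = $MgmtContext.VirtualMachines | ? VirtualMachineId -eq $MachineId;
} # if
```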

Furthermore I split the functionality into different scripts so I can easily assign them via custom properties and build profiles (vCAC: Dynamically execute Scripts in ExternalWFStubs Workflows with PowerShell – d-fens GmbH ).

For lengthy operations I find it helpful to notify users about what is going on (Notifiying Users in vCAC via ‘Recent Events’ – d-fens GmbH and vCAC: Setting the Status of a Virtual Machine in ‘My Machines’ – d-fens GmbH).

When using the MgmtContext it is helpful to remove any unused links and entities from the tracking context before continuing (Housekeeping the vCAC MgmtContext – d-fens GmbH).

As "pfleischer" mentioned, you definitely want to use structured exception handling. But I do not understand your remark regarding break/continue within try/catch statements; for me they work fine. The only "odd" thing I came across is using break/continue within an "object | % { if(somethingHappened) { break; } }" construct. There you will not break out of the loop (because | % is not actually a for loop).
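Because break inside a ForEach-Object script block does not behave like break in a loop statement, a plain foreach loop is the safer construct when you need early exit ($collection and $somethingHappened are placeholders):

```powershell
# In a real loop statement, break exits exactly this loop - no surprises:
foreach ($item in $collection) {
    if ($somethingHappened) { break }
    # ... process $item ...
}
```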

In addition, you might want to keep an eye on how you return values so you get consistent behaviour (like $true/$false/$null), e.g. always return $null on failure and do not re-throw (let the caller decide what to do).
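A small sketch of that convention (the function, the resolution logic and the Owner property are hypothetical; only the $null-on-failure pattern is the point):

```powershell
function Get-MachineOwner {
    param($MgmtContext, [string] $MachineId)
    try {
        $vm = $MgmtContext.VirtualMachines | ? VirtualMachineId -eq $MachineId
        if (-not $vm) { return $null }
        return $vm.Owner
    }
    catch {
        # Return $null instead of re-throwing;
        # the caller decides whether the failure is fatal.
        return $null
    }
}
```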

You mentioned the "*-Host" cmdlets: these will not work, as there is no "screen" to write to - and the information would not be persisted anyway. So probably the most important thing for me is logging to a file or the event log, which helps me understand what happens once you are in production and can no longer change your scripts so easily. You can use log4net as a fast, synchronised method across different workflow instances (as we use in our logging module: biz.dfch.PS.System.Logging now supports log4net – d-fens GmbH). In combination with a try/catch block you might want to log the stack trace like this:

[string] $ErrorText = "catch [{0}]" -f $_.FullyQualifiedErrorId;
$ErrorText += (($_ | fl * -Force) | Out-String);
$ErrorText += (($_.Exception | fl * -Force) | Out-String);
$ErrorText += (Get-PSCallStack | Out-String);

... with that you can easily find the error line in your code. As a good log viewer on Windows I can recommend BareTail (www.baremetalsoft.com/baretailpro/). Paying for the PRO version is a good investment: it lets you regex over your logs in real time.

A good alternative to the ServerCertificateValidationCallback is Set-PowerCLIConfiguration, if you are using PowerCLI anyway. With the "-InvalidCertificateAction Ignore" option you accept virtually any certificate (even expired ones). This is a little slower, but you will not run into out-of-thread errors for specific HTTPS requests.
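The PowerCLI setting mentioned above is a one-liner, typically run once on the DEM worker:

```powershell
# Accept invalid/expired certificates for PowerCLI connections;
# -Confirm:$false suppresses the prompt (important in workflow runspaces).
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
```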

For addressing concurrency issues (parallel processing, or what you called race conditions) I would implement some kind of locking, like here: Synchronisation Issues in vCAC Workflows and how to solve them – d-fens GmbH
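One locking approach - not necessarily the one from the linked article - is a named .NET mutex, which serialises workflow instances running on the same host (the lock name is hypothetical):

```powershell
# The 'Global\' prefix makes the mutex visible across sessions on this host.
$mutex = New-Object System.Threading.Mutex($false, 'Global\vCAC-SharedResource')
try {
    [void] $mutex.WaitOne()
    # ... critical section: read/modify the shared resource ...
}
finally {
    $mutex.ReleaseMutex()
    $mutex.Dispose()
}
```

Note this only protects against concurrent activities on one DEM worker; with multiple worker hosts you need a shared lock (e.g. in a database), as the article discusses.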

Disclaimer: I am the author of the above cited articles. I just link them here to avoid retyping all the code again.

Ronald Rink d-fens GmbH