Chip Zoller's Blog

     All of us need help at some point, and it’s only too common to reach out to others to ask for help. Before you do that, however, you need to understand some basic rules. These rules exist not only to maintain a sense of courtesy and respect, but to ultimately get you the best help possible. While most of this wisdom is applicable to any online technology forum, it is specifically geared to the VMware Technology Network (VMTN), VMware’s official online community.

 

     In the interest of brevity, this article has been intentionally kept short and sweet. These are only the most basic and essential of rules to keep in mind. It is not meant to be an exhaustive list (because you probably wouldn’t read it then). But please read all of these. All of them. Seriously.

 

     There are four major categories you need to understand and read. All rules can be grouped in one of these categories.

 

A. Having an Issue

     When a problem strikes and you get the itch to reach out for help, make sure you’ve done all these first.

 

B. Planning to Post

     Once you’ve exhausted the resources on your own in Having an Issue, you want to make a post. Prior to that, understand what you’re walking into and have a basic sense of what to expect. It’s easy to forget some of these points.

 

C. Posting

     Now that expectations are level-set, you want to create that post. Ensure you do all of these things correctly and you’ll almost certainly solve your problem.

 

D. Following-Up

     You’ve created your post, but the job isn’t over. Follow through by responding and closing things out.

 

A. Having an Issue

 

     1. Help yourself first. Always. This means conducting multiple, basic searches on Google in addition to the forum on which you intend to post. If a user can answer your question with a result from a Google search appearing on the first page, you have failed this step.

 

     2. Read the documentation. The term “RTFM” comes to mind. You need to read what has already been written on the product in question. This includes the release notes. Never skip either of these two steps.

 

B. Planning to Post

 

     1. Do not post angry or with an attitude. If you’re heated or frustrated with your issue, wait a while and calm down. Do not take your anger out on a public forum. It does absolutely no good and only turns people away.

 

     2. Always read the community rules. Yes, those you agreed to uphold when you registered an account. Read and commit them all to memory.

 

     3. Keep in mind these things when asking others for help:

    1. These are volunteers with professional lives.

    2. They are spending their spare, personal time to help YOU. That is time that could have been spent with their own friends and family, or to advance their careers.

    3. Their knowledge and expertise is worth hundreds or thousands of dollars in any other venue.

    4. You are getting all of this completely for free. You are expected to spend your time and convenience in exchange.

    5. The community members don’t owe you anything whatsoever. Assistance is provided on a best-effort basis but you’re entitled to nothing.

    6. While some are, most community members are not VMware employees which means:

      1. They don’t know futures and couldn’t tell them to you even if they did know. Don’t ask when a version is being released or if support for X will be included.

      2. This is not an extension of your support and subscription (SnS) contract. If you’re dissatisfied with a response or don’t get one, you can always open an official SR with VMware. If you can’t or don’t want to pay, that isn’t the community’s fault.

 

C. Posting

 

     1. Make your thread title appropriate and descriptive. Members should have a decent idea of what you’re asking before reading it. Examples of bad thread titles include:

    1. “Vmware”
    2. “I need help!!!”
    3. “Do not start”

 

     2. Thoroughly describe your situation. Don’t be a lazy poster; there’s no one people hate helping more. Include all product names, all versions, and all steps you performed to generate the result. Here are some examples of actual bad questions:

    1. “All our VMs were powered off on weekend and why dont know when and why?”

    2. “Suggest discovery tools for data center migration”

    3. “how to install SRM”

    4. “I need to be able to enable the option to edit hot. vMware vSphere 6.5 does not have the option.”

    5. “hello can someone help me with a kickstart script that will install esxi and configure a static ip address on vlan x for mgmt i would also like to add a second uplink to my vswitch and add a new portgroup for virtual machine traffic on vlan y”

    6. “Hello Expert, please help to resolve this problem <screenshot>”

 

     Each of these examples is a violation of rules discussed earlier. If you aren’t sure why these are bad questions, start at the top and read again.

 

     3. Screenshots help immensely. If in doubt, post some screenshots anyway. VMTN allows pasting them directly in the body, making it extremely easy. But please don’t be an idiot by taking a picture of your filthy monitor with your camera phone (crashes excluded). If you don’t know how to take a proper screenshot, see Having an Issue and learn. Every OS has free screenshot tools built-in. Learn how to use them.

 

     4. Don’t ask for things on a silver platter. Asking for assistance is fine. Asking for someone to hand over bespoke or custom work to you is not. This includes scripts and custom documentation. Online forum communities are not places where you go to get your butt wiped.

 

     5. Attach logs. If you don’t know where they are or how to do that, see Having an Issue.

 

     6. Why do you want to do this thing? Use case matters, because very often what you think you want to do shouldn’t be done that way. The community might have alternatives that work better for you. You never know.

 

     7. English may be a second language. People understand that you may not be an expert in English and that it’s hard. However, just because you haven’t mastered English doesn’t release you from attempting to help yourself first or providing enough information in your post.

 

D. Following-Up

 

     1. Be responsive in your thread. When you open a thread/question, have the decency to reply to any answer you might receive in a timely manner. If you care less about solving your own problem than the community members do, don’t expect them to go out of their way to help you.

 

     2. Declaring an answer. If and when you do get a correct answer, don't just cut and run away with it. Confirm via some method that the answer is indeed the correct or desired one. This could be a verbal acknowledgement or, in the case of VMTN, use the built-in "Helpful" or "Correct Answer" buttons. This not only provides a modest point reward to those who spent their time helping you but also flags your post as being answered so others who come behind you have the benefit of a correct answer as well.

 

 

To sum up these rules and reminders, let me distill it into just a few short sentences.

 

Show the community you're serious about helping yourself. You do this by demonstrating these points: You searched, you read, you attempted to help yourself. When that failed, you thoroughly described your problem and provided ample details for others to take on your case. After getting responses, you replied in a timely fashion and were cordial, expressing appreciation for being helped. Once receiving an answer, you acknowledged it in some form. If you can follow this pattern, you will have the absolute best chance of getting your issue resolved.

 

Additional Links

 

https://communities.vmware.com/docs/DOC-1070

A good overall “how to” guide for asking and answering questions on VMTN. Focuses on the VMware Fusion product, but still good.

 

http://www.catb.org/esr/faqs/smart-questions.html
Probably the single-best FAQ for how to ask on technology message boards. Once you’ve mastered the rules in this short guide, read this FAQ to become a posting legend.

     Today, I wanted to spend some time returning to a basic principle of vRealize Automation (vRA) which is central to creating customized request forms and limiting blueprint sprawl. It is also a capability users frequently request. Unfortunately, those unfamiliar with vRA often think this type of customization requires complex, expensive integrations or, even worse, long professional services engagements. Specifically, what I’m talking about is the ability to generate drop-down lists when requesting a catalog item, as well as make those drop-downs dynamic in nature. In this article, I will provide a basic overview of how these work, why they’re useful, and how you can easily create these drop-down menus yourself.

 

     Within vRA, the ability to customize the look and operation of request forms is paramount to its success as a Cloud Management Platform (CMP). When “doing cloud,” some customers have basic needs while others are extremely complex. This wild difference in utilization requires a CMP that is flexible and can be easily extended. One of the most basic ways to provide such an extension is to allow users some form of choice during their request, and the most common way of providing that choice is through a drop-down menu which offers different options.

 

 

 

Suppose you wanted to provide your users a choice at request time allowing them to select an application to provision. While you could certainly hard-code this information in your blueprint, you would be required to offer every choice in a separate blueprint. Based on what you see here, that would be nine separate blueprints. The alternative is to create one blueprint and allow nine possible selections through a drop-down menu. If you’re reading this, you probably agree that one blueprint is simpler.

 

This all is good, but how is it possible to create these lists? I’m glad to show you how!

 

One of the core tenets of vRealize Automation is its extensibility and furthermore its flexibility. An often-overlooked mechanism central to enabling this is known as custom properties. These properties are key-value pairs that can be attached all throughout vRA and provide a rich metadata engine which can then be used to either label things, or to make active decisions based on them. For information on the out-of-the-box, supported custom properties, refer to the official reference guide here (PDF).

 

Referring back to my previous screenshot that showed applications as an example, I might have a custom property named CZ.Application whose value is set to Apache when I select “Apache” from the drop-down list. When the catalog item, which is ultimately a vRA blueprint, is requested, this input form is presented to the end user. When Apache is selected and the form is submitted, the result looks like CZ.Application = Apache. This metadata then follows the deployment through the various pipes within vRA and vRealize Orchestrator (vRO). As it is passed from one internal system to the next, each process can see this key-value pair that has been assigned. Since vRA is machine lifecycle aware (thanks largely to the Event Broker), we can automatically perform actions based on a custom property’s presence and/or value at specific lifecycle states. For example, you might want to land virtual machines that get Apache installed on them in a different cluster than those which get SQL. Since Apache and SQL are different values for the CZ.Application key, it is possible to perform different operations based upon each one.
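To make that idea concrete, here is a minimal sketch of what acting on such a property might look like inside a vRO script tied to an Event Broker subscription. This is an illustration built on assumptions, not code from this article: it presumes an input named payload (of type Properties) whose machine custom properties are nested under machine.properties, which varies by event topic.

//Hedged sketch: reading a custom property from an Event Broker payload.
//Assumes "payload" (Properties) is the subscription input and that the
//machine lifecycle topic nests custom properties under machine.properties.
var machineProps = payload.get("machine").get("properties");
var app = machineProps.get("CZ.Application");
if (app == "Apache") {
    System.log("Apache workload detected; steer placement to the web cluster.");
} else if (app == "SQL") {
    System.log("SQL workload detected; steer placement to the database cluster.");
}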

 

Let’s begin by looking at the simplest form of drop-downs and how to create them.

 

 

Lesson 1: Static Values

 

In order to create the most common type of drop-down, which is a static list, we must create a new custom property. From the vRA console, navigate to Administration -> Property Dictionary -> Property Definitions. Click the green plus symbol to create a new custom property.

 

The name is, you guessed it, the name of the custom property. This is the key portion of the key-value pair.

 

The label is the user-friendly word or phrase users will see in the request form; it has no functional effect otherwise.

 

Text populated in the Description field will appear as a tooltip if a user hovers over an icon to the right of the field in the request form. This is useful to give them some direction so they know what to select.

 

Visibility describes which tenant has access to this custom property, and that can either be all tenants or just this one.

 

The display order decides where this custom property shows up relative to others on the request form (if you decide to show it in the first place).

 

Data type is what type of values are eligible to be included. I’m only going to be covering the most common type in this article: string.

 

Required means the field must have a value in order to allow the request to be submitted.

 

And Display is how this gets visualized on the request form. This article is specifically about drop-downs, so select that from the list (which is, of course, itself a drop-down).

 

Values allow us to statically define each and every value (the value part of the key-value pair this time) manually, or to get them from another source. For now, go with static list. We’ll come back here later.

 

And in the list itself, you can begin to define those values. The name is what users will see in the request form, and the value is what the custom property will actually be set to.

 

 

 

For example, I’ve created one we’ll build upon in this article called CZ.Continents. The list contains four static entries. A user will see “North America” in the drop-down list and, if selected, the value will be set to NA. When the user then submits the request, the value of CZ.Continents will equal NA and follow the request through the vRA system.
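Based on the values the Countries Action in the next lesson expects, the four entries map out like this (Name is what the user sees; Value is what the property is actually set to):

Name             Value
North America    NA
South America    SA
Europe           Europe
Asia             Asia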

 

Once this custom property is saved, you will have a new definition in the dictionary. Let’s go and now attach this custom property to a blueprint to see its effect.

 

Click the gear icon when in the blueprint canvas editor to bring up the properties menu for the blueprint itself.

 

Navigate to the Properties tab at the top, and then the Custom Properties sub-tab on the bottom. Add a new property and find yours in the list. Make sure to check the box “Show in Request” so it is visible on the request form. Click OK -> OK -> Finish to save the updates to your blueprint and go to the catalog to see how your request looks.

 

 

 

Congratulations! You’ve just created your first drop-down selection list!

 

Static drop-downs are nice in that they’re suitable for smaller, simpler selection lists. They can be maintained within vRA, and are reusable in a custom property. However, their drawbacks are that they must be maintained manually and cannot be chained together so that the value of one influences the contents of another drop-down. This is where dynamic drop-downs come into play, and it’s also where a huge misunderstanding about the product lives. Let me say that more plainly: To create dynamic drop-down lists, you do not need another third-party product!

 

And for those familiar with the SovLabs Property Toolkit, this functionality may seem similar, but they are actually quite different and achieve different effects. While dynamic drop-downs provide options to end users at request time that still require explicit selection, the Property Toolkit predetermines that choice in the background, removing the need for users to make any selection at all. Through abilities such as property synthesis and dynamic property sets, the Property Toolkit allows custom properties to be set in an automated and templated fashion that does not rely upon drop-down lists.

 

Now that some of the confusion has been put to rest, let’s explain dynamic drop-downs a little better building upon our previous example of Continents.

 

 

Lesson 2: Dynamic Values

 

 

     Assume you wanted multiple selection lists in a single blueprint. Each custom property is capable of storing its own list of static values, but the properties aren’t aware of what was selected in the others. So although I might have a field called “Continents,” I may need a second one called “Countries” which lists the countries based on the prior selection. For example, if I selected North America from the list of Continents, I would expect to see only a filtered list of North American countries in the following selection list. It would be fairly useless if I were able to select a country outside of North America. This is what we mean by dynamic drop-downs: The presentation of values in a subsequent list is influenced by the selection of an earlier value, and so on. This is where we have to involve vRealize Orchestrator (vRO), because vRA, and more specifically its simple static lists, is not capable of performing logic. That logic takes the form of “if this then these” in the case of multiple options, or “if this then this” in the case of just a single one. By using vRO, which comes baked into vRA and which you own as a part of it, we can craft “Actions.” Actions are reusable scripting components in vRO that are able to understand this flow of logic and respond programmatically.

 

An Action is nothing more than a script (written in JavaScript) which takes some inputs, transforms them based on the code you write, and then returns an output. The input can be, and very probably is, the value of a prior selection you made, and the output can be a list of things based only on that input. In so doing, we can present dynamic lists of values. Let us go through the process of creating and then consuming these Actions.

 

     For access to Actions, we need access to vRO. I’m assuming in this article you are at least familiar with vRO, have permissions to log in to it, and know how and where to do so. If this isn’t information you know at this point, please see the relevant vRealize Automation documentation for your version. Once logged in using the vRO client, enter Design mode from the drop-down list to the right of the logo.

 

Click on the gear icon to enter the Actions inventory list. Right-click on the vRO object at the top of the tree and create a new module. A module is effectively a folder into which we store Actions.

 

Modules are commonly named in reverse DNS format, likely as vestiges of the earliest ties to Java in which this was a common practice. We’ll follow suit and name ours similarly.

 

Now that we have our module, we can begin to create Actions. Right-click on your module (folder) and select Add action. Give it a name. I’ll call this one “Countries”.

 

Click the pencil icon to edit your new Action. Flip over to the Scripting tab. This is where the magic happens!

 

The Scripting tab contains information the Action needs to perform its decision-making process. It does this by taking an input, doing something with that input, and then giving you an output in return. There are three steps we must complete in order to properly build this action so we can then consume it in vRA. See the screenshot below for an example of the Scripting tab and the steps we’ll outline going from top to bottom.

 

 

 

First, we must define the return type. The return type is the type of output we are telling the Action it must provide. There are all different types within vRO, so don’t get overwhelmed when you see the list. The one we’ll be dealing with is Array/string. This is essentially a list of different strings, because we in turn have to populate a drop-down box—a list, that is.

 

Second, we have to give it an input. After all, we can’t do work if we’re not told what to work “on”. Click the right arrow icon to add an input. Call it what you like, but make it a type of string. In my example, since we plan to deliver back a list of countries, we have to take in the name of a continent. Make sense?

 

And finally, we have to tell the Action what it needs to do and then hand back. In my screenshot, I’ve deliberately expanded this text so it’s friendlier on novice eyes and easier to read and decipher. And not to worry, I’m not expecting you to type all of this out. I’m not even asking you to copy and paste. At the end of the article, you’ll get all the actions you see here so you may easily import them to quickly get up and running with some dynamic drop-downs in your own vRA environment. That said, here’s the script block if you wish to follow along and try your hand at creating an Action.

 

if (continent == "" || continent == null){
    return ["Please select a continent first."];
}
if (continent == "NA"){
    return [
        "United States",
        "Canada",
        "Mexico"
    ];
}
if (continent == "SA"){
    return [
        "Chile",
        "Brazil",
        "Argentina"
    ];
}
if (continent == "Europe"){
    return [
        "France",
        "Italy",
        "Germany"
    ];
}
if (continent == "Asia"){
    return [
        "China",
        "Japan",
        "Vietnam"
    ];
}
//Defensive default so the drop-down never receives an undefined result.
return [];

 

 

The logic is pretty simple. In human readable prose, we’re saying the following:

 

  1. If the continent name is blank, let the user know they must select one first.
  2. If the continent is NA (for North America), return United States, Canada, and Mexico.
  3. If the continent is SA (for South America), return Chile, Brazil, and Argentina.
  4. If the continent is Europe, return France, Italy, and Germany.
  5. If the continent is Asia, return China, Japan, and Vietnam.

 

So in each case, the input is the name of the continent, and the output is a list of three different strings, hence the need for an array of strings back in step number one.
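As an aside, the same logic can be expressed more compactly as a lookup table instead of a chain of if statements. Here’s a functionally equivalent sketch if you prefer that style:

//Equivalent sketch: an object used as a lookup table.
var countriesByContinent = {
    "NA":     ["United States", "Canada", "Mexico"],
    "SA":     ["Chile", "Brazil", "Argentina"],
    "Europe": ["France", "Italy", "Germany"],
    "Asia":   ["China", "Japan", "Vietnam"]
};
if (continent == "" || continent == null) {
    return ["Please select a continent first."];
}
//Return the matching list, or an empty list for any unexpected value.
return countriesByContinent[continent] || [];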

 

Once you’re satisfied that all is well, Save and Close this Action. If you’ve made any errors that render this script unparseable, vRO should let you know so you can go back and fix it. With that done, let’s now turn over to vRA and add this Action to a new custom property for Countries that generates our list dynamically rather than from a static list of values.

 

Create a new custom property just like we did with CZ.Continents. Everything can be the same except for one important selection. For Values, choose the radio button for External values as opposed to Static list.

 

When you make this change, you’ll see two new portions revealed: Script action and Input parameters. Click the Change button under Script action which will bring up a window that shows all available modules from vRO and the eligible Actions within them. You should be able to locate the new Action you created.

 

If you do *not* see your new Action, edit it inside the vRO client and ensure you set the return type as Array/string. vRA filters the Actions within vRO to display only those whose return types are compatible with a custom property definition; if the return type isn’t compatible, the Action will be absent from the list.

 

Click OK once you’ve found and selected your action to be brought back to the custom property definition. When you’re there, you’ll now notice it has pulled in your input. Highlight this input, click the Edit button, and check the Bind box. With a binding, we are telling the input to grab the value from another custom property. This is how we tie (or bind) two or more custom properties together where the value of one feeds the input of another.

 

 

Select your custom property for Continents, click OK, then click OK to save your new custom property definition. Once that’s done and you have your new Countries custom property, go back to your blueprint and add this one to the existing one for Continents. Remember to show it in request. Mine now looks like this.

 

 

 

From here, let’s go back and request this catalog item to see how it works!

 

 

 

 

Cool! So we now have two drop-down lists in the request form, each one originating from a different custom property. The Continent property is, if you remember, a static list of values we defined, while the new Country property is a dynamic one being executed as a vRO Action. When I select “Europe” for the Continent field, I am presented with only the three options that pertain to Europe and nothing else. If I change that value to “Asia”, the list filters to only countries in Asia. Pretty slick, isn’t it?!

 

     Now that we have two custom properties supplying drop-down menus, one static and one dynamic, let’s chain one more dynamic drop-down onto this form to filter at an even more granular level. Create a new one called “Cities” that connects to Countries and dynamically presents the cities for a given country. I won’t give away all the answers this time, except for the code you can use to practice. If you’d like to skip the lesson, the Action has been pre-built for you and is available as a download at the end of this article. The order is:

 

  1. Create new vRO action.
  2. Create new custom property consuming new vRO Action.
  3. Bind the input to Countries.
  4. Add custom property to blueprint and show in request.
  5. Provision.

 

If you’d like to adapt your own JavaScript from Countries to reuse in Cities, feel free. Otherwise, I’ll provide a block for you here.

 

 

if (country == "" || country == null){
    return ["Please select a country first."];
}
//North America
if (country == "United States"){
    return ["New York","Atlanta","Chicago"];
}
if (country == "Canada"){
    return ["Toronto","Ottawa","Montreal"];
}
if (country == "Mexico"){
    return ["Mexico City","Monterrey","Tijuana"];
}
//South America
if (country == "Chile"){
    return ["Santiago","Valdivia","Pucón"];
}
if (country == "Brazil"){
    return ["Rio de Janeiro","São Paulo","Salvador"];
}
if (country == "Argentina"){
    return ["Buenos Aires","Córdoba","Rosario"];
}
//Europe
if (country == "France"){
    return ["Paris","Lyon","Nice"];
}
if (country == "Italy"){
    return ["Rome","Venice","Milan"];
}
if (country == "Germany"){
    return ["Berlin","Munich","Hamburg"];
}
//Asia
if (country == "China"){
    return ["Shanghai","Beijing","Tianjin"];
}
if (country == "Japan"){
    return ["Tokyo","Osaka","Yokohama"];
}
if (country == "Vietnam"){
    return ["Ho Chi Minh City","Hanoi","Da Nang"];
}
//Defensive default so the drop-down never receives an undefined result.
return [];

 

 

Take a few minutes now to go through the steps we outlined earlier and create the various objects within vRA and vRO and connect the pieces in your blueprint.

 

If you’re done, hopefully you’ve checked your request form and it looks something like this.

 

(**Multiple images removed to fit this blog**)

 

If you got this result, a hearty congratulations! You now should have the hang of static lists within custom properties as well as using vRO Actions to create fully dynamic drop-downs. But there’s still one more twist I want to throw in here. While these static and dynamic drop-downs create lots of flexibility in your request forms and reusability in your blueprints, sometimes it isn’t possible to record these values either in vRA or in vRO. In many cases, you might already have a source for these lists and need to present that list as-is in vRealize Automation. This is especially true if the list resides in a secondary system of record such as a database where other applications and processes must have access to insert, update, and delete data directly. It is still possible to use these external sources to provide the data for a drop-down, and this is the third and final use case I want to show for drop-downs: harvesting content from a database.

 

Lesson 3: External Values

 

     In the case of a database, you have data that resides within a table that you wish to pull out and present as your dynamic list. When new data is entered, updated, or deleted, no changes are necessary in either vRA or vRO because the source of truth lies external to both of those systems. I’ll illustrate how you can easily connect the plumbing, similar to the second lesson on dynamic drop-downs, but this time to an external Microsoft SQL Database instead!

 

To start, ensure you have an existing MS SQL database to work with and have sufficient credentials and network connectivity from vRO to connect to it. You will also need to know the table and column from which you want the data pulled. With those things in hand, return to the vRO client.

 

From the Run menu, access the Workflows inventory tree, and browse to Library -> JDBC -> JDBC URL generator.

 

In order to connect to that SQL database, we need to get a proper URL that has all the various pieces and parts to direct communication to the right server, instance, and database. We will reuse the URL in the next step. Run the JDBC URL generator workflow.

 

 

In the first step, we need to fill out the form with the type of database (SQL Server/MSDE for Microsoft’s), the hostname where the database runs, the name of the actual database, and credentials to connect. In this lesson, I’ve created a very basic SQL database called “MyContinents” with a single table containing, yep, a list of continents.

 

 

On the second and final step, provide the database instance name if it’s something other than the default of MSSQLSERVER, and the domain of the user supplied earlier. If you do not know the instance name of your SQL server, you can find this, among other places, in the list of Windows services.

 

Once you submit the workflow and it succeeds (if it did not, go back and check the values you provided), check the Logs tab of the completed workflow. The message shown contains the URL we will use in just a second.

 

[2018-05-10 20:38:56.622] [I] Connection String: jdbc:jtds:sqlserver://sql2014.zoller.com:1433/MyContinents;domain=zoller.com

[2018-05-10 20:38:56.668] [I] Connection to database successful

 

In this case, the connection URL we want to copy is the part beginning with jdbc:jtds:sqlserver://. Copy everything you see highlighted above for use in the next workflow. We can also see the test connection was successful.

 

With this JDBC URL in hand, navigate to Library -> SQL -> Configuration -> Add a database.

 

Run the workflow. For the name, it’s best to call it the same name as the database to which you’re connecting. Select MS SQL for the type, and paste in that JDBC URL you received in the last workflow.

 

 

Click Next and leave Shared Session as the default while entering the username (without domain information) and password. A Shared Session ensures that the query is always executed with the same credentials no matter who is pushing buttons inside the vRA portal.

 

 

Click Submit and watch it go. When it completes successfully, flip over in your vRO client to the Inventory tab. Find and expand SQL Plug-in, and make sure you can see your newly-added database and any tables and columns.

 

 

Again, mine is very simple for illustration purposes, so I have but a single table with a single column. Ultimately we only need these three things to start pulling data.

 

Once you’re satisfied with your database configuration, it’s time to create the new Action. This one will have three inputs, because we want to be able to supply database, table, and column all from within vRA without having to manually update JavaScript. This makes the Action extremely simple to connect and very quick to update if we later want to point at a different database. The JavaScript is a little more complex here, so I’ll save you the trouble of attempting to figure it out. Simply copy and paste from the code block below or, better still, download the Action I’ve pre-built for you.

 

Here’s the code we’ll use to pull in values from a specified column within a table. Comments, denoted by double forward slashes, are placed generously so it’s easy for you to identify exactly what the following code attempts to do.

 

 

//Find the name of the database from inputDB and convert it to a SQL:Database type.
var databases = SQLDatabaseManager.getDatabases();
var database = null;
for each (var thisdb in databases) {
    if (thisdb.name == inputDB) {
        database = thisdb;
        break;
    }
}
//Stop early if no database by that name was found.
if (database == null) {
    return ["Database " + inputDB + " not found."];
}
//Build the SQL query statement which searches the input database within the specified table and returns results from only the inputColumn name.
var query = "";
query += "SELECT ";
query += inputColumn;
query += " FROM ";
query += inputDB;
query += ".dbo.";
query += inputTable;
//Execute the query against the matched database, then store the results in an array. Return the array at the very end.
var list = database.readCustomQuery(query);
var results = new Array();
var d, i;
for (i = 0; i < list.length; i++) {
    d = list[i][inputColumn];
    results.push(d);
}
return results;

 

Now we can head into vRA and create a new custom property that calls this Action. Since my database is called MyContinents, and contains the same list of continents that was stored in the static list, let’s swap out that static list custom property for this one that pulls from an external SQL database.

 

 

From the Property Definitions section, I’ll create a new one called CZ.DB.Continents which uses this action. Because we defined three inputs on the Action, we must supply three values for those inputs here.

 

 

 

Note that we do not bind these inputs because we’re telling it directly which database, table, and column to use. From there, the Action knows what to fetch and return.
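To make that concrete: with my simple example, the inputs might be inputDB = MyContinents, inputTable = Continents, and inputColumn = ContinentName (the table and column names are hypothetical placeholders for whatever yours are called), which would have the Action assemble and run this query:

SELECT ContinentName FROM MyContinents.dbo.Continents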

 

With this complete, there is one last step and that is to swap out the custom property binding of our Countries property so it keys off this new custom property and not the first one we created with static values. Edit your Countries property now to update the binding. Don’t forget to edit your blueprint to remove the static list of Continents and supply this new custom property that pulls them from the database.

 

If you go back to your request form, you should now see that your Continents custom property is pulling in those names directly from your external SQL database.

 

 

Very cool, indeed! You’ll also find that if you updated your bindings, your cascading dynamic drop-downs continue to work as before. The best part about this Action is that it’s super simple to reconfigure your existing custom property to point to a new column, table, and database, and just as easy to create a new custom property and supply different values, all without needing to return to vRO to manipulate code.

 

Conclusion

    

     During the course of this tutorial, I’ve introduced custom properties and briefly explained their function. We then took a look at a simple custom property which statically defines its values in a list and showed how that list can be leveraged at request time. From there, we discussed vRealize Orchestrator in the context of Actions and how, through their scripts, they can dynamically supply the values of further drop-down selection lists. Finally, I illustrated how it is possible to pull the reference values for a drop-down list from an external data source, specifically a Microsoft SQL database.

 

     I hope this has been a helpful and, above all, informative article that helps you grasp the concept of drop-down lists in vRA. Furthermore I hope this arms you with some good tools (Actions) you can download and put to work in your own environment, instantly delivering value! If you did not enjoy this, maybe keep that to yourself. (Only kidding, of course.) Either way, I’d love to hear your feedback whether that’s positive or negative, so feel free to reach out to me on Twitter (@chipzoller). Good luck making your CMP more valuable, feature-rich, and flexible.

 

P.S. – For a vRO Package containing all three Actions covered in this tutorial, download it from VMware {code} here. If you’d like a quick article illustrating how to import a vRO package, see this blog. If you’re only interested in the Action illustrated in Lesson 3, see this page.

     Software components (SCs) are pretty neat in vRA (although you pay a pretty penny for them), but they have some serious shortcomings. One of their really cool abilities is “binding,” which lets a property (variable, really) assume the value of something else inside that blueprint. This makes scripts much more dynamic, but one of those shortcomings is that not all properties are available for binding. In this article, I’ll give you an easy workaround that makes it possible to bind to any custom property which gets passed into the machine.

 

Property binding is accomplished by editing the property once the SC is dragged and dropped onto a machine element on your canvas and ticking the Bind box.

 

 

When you enable binding, the property can assume the value of something else rather than holding a static one. For example, rather than statically writing the hostname into a SC (which you wouldn’t know anyway, since it’s assigned at runtime), you can bind a property to what will become the hostname. Binding values have a hierarchy just like blueprints and their elements. So to accomplish this, you’d put your cursor in the Value box and press the down arrow key. In my simple, simple blueprint for testing here, this would give me the following.

 

 

The first entry is the name of the machine element, the second is the SC name, and the third is the placeholder for all resources on the blueprint. To access the next level, select one of those three values and type the tilde character (~). It’ll then bring up a selection list of the next layer in that hierarchy.

 

 

What you’re seeing here is the result of selecting _resource~<machine_component>~. So to choose the hostname, we can select MachineName from the list.
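Put together, the final bound value ends up looking something like this, where vSphere_Machine_1 is a hypothetical machine element name (yours will match whatever the component is called on your canvas):

_resource~vSphere_Machine_1~MachineName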

 

Now, whenever this SC runs on the machine that gets provisioned, it’ll store the host name as the value of that property so you can reuse it elsewhere.

 

This is all well and good and works just fine, but some things just don’t show up in this list, for example properties at the blueprint level or those stored in property groups. Something else that doesn’t appear, which I noticed through a recent VMTN Communities post, is binding to the IP address of a second NIC on a machine. It just isn’t in the list, and even “ip_address”, which does appear, only captures the IP from the first NIC (that is, nic0). The custom property for the second NIC would be VirtualMachine.Network1.Address, and although this is something you can specify on the machine element in the blueprint, if you expect it to be pulled from a network profile or some external extensibility, you can’t specify it. This means it isn’t available for binding in the blueprint canvas. But there’s another way!

 

When a SC runs via the bootstrap agent, the agent gets the full payload from vRA which contains all of the custom properties which apply to it. You can actually see these and their values in (for Linux) the log file at /opt/vmware-appdirector/agent/logs/agent_bootstrap.log.

 

Apr 29 2018 08:39:44.640 DEBUG [pool-1-thread-1] [] com.vmware.darwin.agent.task.ScriptTaskRunnerImpl - Appending custom property:name 'dailycost' value '{"type":"moneyTimeRate","cost":{"type":"money","currencyCode":"USD","amount":0.8683999899490741},"basis":{"type":"timeSpan","unit":4,"amount":1}}'.

Apr 29 2018 08:39:44.640 DEBUG [pool-1-thread-1] [] com.vmware.darwin.agent.task.ScriptTaskRunnerImpl - Appending custom property:name 'virtualmachine.network1.addresstype' value 'Static'.

Apr 29 2018 08:39:44.640 DEBUG [pool-1-thread-1] [] com.vmware.darwin.agent.task.ScriptTaskRunnerImpl - Appending custom property:name 'cafe.request.vm.archivedays' value '3'.

Apr 29 2018 08:39:44.640 DEBUG [pool-1-thread-1] [] com.vmware.darwin.agent.task.ScriptTaskRunnerImpl - Appending custom property:name 'virtualmachine.software.execute' value 'true'.

Apr 29 2018 08:39:44.640 DEBUG [pool-1-thread-1] [] com.vmware.darwin.agent.task.ScriptTaskRunnerImpl - Appending custom property:name 'virtualmachine.network0.gateway' value '192.168.1.1'.

Apr 29 2018 08:39:44.640 DEBUG [pool-1-thread-1] [] com.vmware.darwin.agent.task.ScriptTaskRunnerImpl - Appending custom property:name 'clone_snapshotname' value 'vRA-LinkedClone'.

Apr 29 2018 08:39:44.640 DEBUG [pool-1-thread-1] [] com.vmware.darwin.agent.task.ScriptTaskRunnerImpl - Appending custom property:name 'virtualmachine.network1.address' value '192.168.5.231'.

Apr 29 2018 08:39:44.640 DEBUG [pool-1-thread-1] [] com.vmware.darwin.agent.task.ScriptTaskRunnerImpl - Appending custom property:name 'virtualmachine.network1.networkprofilename' value 'VLAN5Profile'.

Apr 29 2018 08:39:44.640 DEBUG [pool-1-thread-1] [] com.vmware.darwin.agent.task.ScriptTaskRunnerImpl - Appending custom property:name 'virtualmachine.disk1.size' value '5'.

 

Above is just a small snippet of that log, but you can clearly see the IP address of that second NIC is there. So if VMware won’t give it to us in a convenient manner through the UI, we’ll just have to take it by force.

 

What we need is a small function that can parse this log, find whatever custom property we want, and extract the value of that custom property. I’m glad to share that I’ve done exactly this and can give you everything you need to make that happen, for any custom property in the payload.

 

The magic sauce here is a well-crafted grep command which parses that log file to find the custom property, but only extracts the value of what is specified, and here it is:

 

grep -Po "(?<=virtualmachine.network1.address' value ')[^']*" /opt/vmware-appdirector/agent/logs/agent_bootstrap.log | tail -n1

This is a single pipeline command, so if you copy and paste it, keep in mind there are no line breaks. What we’re doing with this command is using the Perl-style regex lookbehind ability of grep to anchor on a string but then discard that string in favor of what comes after it. The -P parameter enables Perl-compatible regexes, while the -o parameter prints only the matching portion of the line rather than the entire line. The regex itself anchors on the “virtualmachine.network1.address' value '” text and captures every character up to the closing single quote, which is the value portion. It looks for this in the agent_bootstrap.log file specified at the path noted. And, finally, it pipes the result to tail and takes only the last match. This last step is done because, otherwise, it would also match the grep command itself, since the script text is echoed into this same log file.
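If you plan to fetch several properties this way, you could wrap the pattern in a small shell function inside your SC script. A sketch (the function name is mine):

#Sketch: generalize the lookup for any custom property name passed as $1.
get_prop() {
    grep -Po "(?<=$1' value ')[^']*" /opt/vmware-appdirector/agent/logs/agent_bootstrap.log | tail -n1
}
#Example usage: ip2=$(get_prop "virtualmachine.network1.address")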

 

To make sense of all this, let me illustrate with a basic SC that uses this method to get that IP address. Don’t worry, I’ll upload this to VMware {code} so you can reuse and recycle at your own convenience.

 

I’ll create a SC with a single property. This property, however, will be a computed property. That is to say we won’t be specifying a value here or on the blueprint itself because it’ll be calculated at runtime.

 

 

In the script portion for the Install phase, I’ll just do the following:

 

# Create a scratch file inside the guest to hold the value.
touch /tmp/proptest.txt

# Extract the second NIC's IP address from the agent bootstrap log.
testprop3=$(grep -Po "(?<=virtualmachine.network1.address' value ')[^']*" /opt/vmware-appdirector/agent/logs/agent_bootstrap.log | tail -n1)

# Show the value in the SC output and persist it to the file.
echo "IP is $testprop3"
echo "$testprop3" >> /tmp/proptest.txt

I create a file inside the guest, then I use the grep string from earlier to fetch the value, which gets stored in the testprop3 variable. This variable is then shown as output and also written to the file. Since this is a computed property, we can then bind other properties to this one and have its value passed to them. In so doing, this effectively works entirely around the inability to natively bind to an arbitrary custom property in the UI!

 

Let’s see it in action.

 

I’ll add this SC to a blueprint and request it. Wait for the execution to complete.

 

 

Then look at the details for the SC (ellipsis button).

 

 

Bang, there it is. Check the VM to ensure this is what was assigned to the second NIC.

 

 

And if we cat the file created in the SC, it has written that value out there showing we can reuse it however and wherever we want.

 

[root@LEXVTST69 logs]# cat /tmp/proptest.txt

192.168.5.231

 

So as I’ve illustrated, using a little bit of trickery, we can capture the value of any custom property that is sent as a payload to a provisioned virtual machine. Simply replace the name of the custom property with the one you want to use. Hopefully this gives you new power and flexibility with your software components.

     Email is one of those things that most of us take for granted these days. We just expect that email works, that it is flexible, and that it’s adaptable to our situation. Within vRA, we can also get email in and out based on a variety of scenarios. But when it comes to customizing how that looks and works, it’s a little tricky. As a result, many folks don’t even bother mucking around with it because of how complex it is: manually uploading new email templates to the appliance, restarting services, customizing XML files, or writing vRO code and Event Broker subscriptions and doing all the plumbing work yourself. But thanks to the godsend that is the SovLabs Custom Notifications module, all of this is now a thing of the past. It not only makes it super simple to create your own custom notifications, but it’s actually kind of fun to see how quickly and easily you can give your CMP that extra edge with rich emails that can use your own HTML, include whatever custom properties you want, and go to basically anyone you want. In this article, I want to bring this module to your attention and show you exactly how you can use it to extend your vRealize Automation platform to send custom email notifications to recipients of your choosing.

 

     In order to get email out of vRA for basic things like knowing whether your provisioning request was successful, you need to set up a few things. First, there’s the outbound email server (Administration -> Notifications -> Email Server) so that you can connect the appliance to the mail relay. Next, there’s ensuring you have the scenarios you wish activated (Administration -> Notifications -> Scenarios), of which there are a whole host. And there’s also ensuring the email address is present in your Active Directory user profile.

 

Once you have all this, you should begin receiving some mail. But when you do, it comes to you only, and it’s just kind of bland.

 

 

You can see all of the basic information, but you don’t get any of the extra metadata that’s in the request (I have a couple of custom fields in this catalog item). And what if you need this to go to an operations team instead of yourself? That’s not really possible without some heavy lifting and running around. Changing what you see above in vRA is not a trivial task, although there are some templates available that can assist. The latter use case of sending email upon provision and destruction to arbitrary recipients is even trickier, with only some community code out there in vRO. That approach requires much more massaging.

 

     Now, let’s see how to accomplish this using the SovLabs Custom Notifications module. What’s nice about this and other SovLabs functionality is that you never have to leave the comforts of your vRA portal, never have to write and manipulate your own JavaScript, and never have to monkey with system files and services.

 

     In this scenario, we will configure notifications to send successful build and destroy emails to an operations team so they can be notified and make the appropriate changes to external systems of record. First, we add a Notifications Configuration through a simple catalog item which was published automatically for you when you added the license.

 

There’s a form we must fill out with all the conveniences you might expect, including which types of events we want (VMLIFECYCLE covers the provisioning aspects, but there are others, including snapshot notifications and IPAM). We choose the states we want with simple check boxes. Then, we choose standard things like title and body. Oh, and as you can see, the HTML is already pre-built for you, making it easy to copy and paste whatever you like!

 

 

Over on the Message Server Configuration tab, we can use an existing mail server definition as well as an existing email group definition. Let’s create a new one to see what’s here.

 

 

Everything you would expect to see, including the ability to set and save email server credentials if you enable authentication. And down in the Email Group configuration portion, we can very easily add To, CC, and BCC addresses all in one profile! This makes it super simple to put whatever email addresses you like into a profile that can then be used on a per-blueprint basis if you like. Submit the configuration, check your requests to ensure it completes, and then go back to the SovLabs vRA Extensibility Modules service in the catalog. This time, select Add Notification Group.

 

Here, we’ll choose the profile we just created, giving it a label and type.

 

 

Once this is submitted and successful, you should now see a nifty new property group that was auto-created for you.

 

 

If you dive into this property group, you’ll see a number of system properties in addition to the main custom property called SovLabs_NotificationGroup, whose value is the profile you created in the last step.

 

 

The SovLabs Notification module essentially checks to see if this property exists and, when it does, it matches the profile to the value and uses that to know what, where, and to whom to send notifications. You can then use this property anywhere in vRA you see fit, but in this demonstration we’ll consume this property group simply by attaching it to a blueprint.

 

In this blueprint, all I’ve done is found and attached this pre-created property group, then saved the blueprint. It’s really that simple and I’ve done absolutely nothing else.

 

 

Now, I’ll go to the catalog and request this item to see what happens.

 

 

You can see I have a couple of custom fields in the request. More on this in a second.

 

 

After the request succeeds, I check my email and find this.

 

 

And, bam, there it is! What you see is an HTML-formatted email that comes entirely out-of-the-box. It’s important to stress that what you see above is stock and I’ve made absolutely no modifications whatsoever to the body of the message. It’s pretty detailed already, right? And easy to read? Certainly. They’ve done a great job of putting this together, but also in making it easy to extend into whatever you’d like to see.

 

     Now that we have a basic email, how do we customize this easily to add additional things like custom properties that I had on the request form? If you scroll up, you’ll see I had a drop-down called CZAD.Users. I supplied a value for this field at request time. I now want to see that in my email that came across. Maybe this is something like an application owner or backup retention policy or other metadata to go into an asset tracking system. Whatever it may be, it’s simple to drop into these custom notifications. So let’s see that process.

 

     From your Items tab, go over to SovLabs Notification and find this profile you created earlier. We need to edit the body of it. Open the item and click the Update action item.

 

 

The body as well as the other fields become editable.

 

 

You have the choice of either editing it as-is here in the text area, or lifting it into an HTML editor of your choice. I’m going to choose the latter because I’m not a hardcore HTML programmer. I’ll just jump over to a simple online editor for demo purposes and paste in the code.

 

 

This may be a little difficult to see depending on how it gets published, but on the right I have pasted in the raw HTML, and on the left I have it rendered out.

 

I want to add a new row under the “Machine details” heading that shows my CZAD.Users custom property and its value. Now, you may notice a number of these words enclosed in double braces like {{ something }}. These are actually custom properties inside vRA that, thanks to the wonders of the SovLabs Template Engine, get picked up and translated automatically. Yes, really, it’s as simple as enclosing whatever custom property we want in double braces, and it can be *any* custom property as well, even system properties!

 

 

Maybe it’s just me, but I find that AWESOME and waaay better than having to write any code to pick it up and interpret it myself.

 

So, getting back, let’s add that line underneath “Network name” so our code and, thus, the HTML gets rendered as shown.
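In my case, the new row amounted to something like this (hypothetical markup; match whatever row structure your template already uses):

<tr><td>Users:</td><td>{{ CZAD.Users }}</td></tr>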

 

 

Again, maybe difficult to see, but I’ve made the code change in the editor on the right (highlighted), and I can see the additions rendered on the left. Let’s copy and paste this entire block back into our notification definition and update it.

 

 

That was successful, now let’s re-request the same item and see what we get.

 

 

I’ve highlighted the field and its value, so we should hopefully see this now.

 

 

 

And, boom, there we go! The new email that came from the system has the custom property added with the correct value. And, likewise, when we destroy this deployment, we get another email with the same information, making it easy for an operations team to get all the info they need.

 

 

     Now, hopefully, you can see how huge this notification module is, how vastly powerful and flexible it can make your vRA infrastructure, and, above all else, how much easier it is to set up and administer than everything else out there. This is a great way to add value to your CMP and can solve a whole variety of use cases. Now all that’s left is to go forth and see how you can use this in your organization!

     Custom properties (CPs henceforth) are the lifeblood of vRA, as many of those reading this know. They are a rich system of metadata which can influence not only the decisions made by vRA proper but also extensibility through vRO. CPs are life, effectively, and one of the biggest challenges is managing those CPs and having their values set simply and without too much complexity. A common problem that arises amounts to CP sprawl in a request form. When CPs are used for most major decisions, the user may be bombarded with a whole array of them in the request form, which can slow down provisioning time, hinder adoption, and lead to user error (by virtue of incorrect selections in the form). Property Toolkit can help eliminate almost all of these problems through transparent, silent assignment of other properties in the background.

 

     Consider this scenario: Your vRA catalog requires users to select a number of values which steer the deployment in a variety of ways. For example, you ask the user to select the following: Application, Environment, Storage, and Network. But you require them to select the correct value based on an earlier selection; for instance, where the application is “Oracle”, they need to provision to “Production” only, as other environments are not licensed appropriately. And also, given the environment, there is a specific network associated with only that environment, so they have to pick the correct one. Now, this can be done with vRO actions, but wouldn’t it be nice if you could ask them to choose only *one* thing in the form and all other decisions were made automatically? Further, wouldn’t it be awesome if all that logic *didn’t* require you to design new workflows or actions in vRO or crunch new JavaScript? Well, if this is something you’re looking to do (or any variant thereof), then Property Toolkit is your savior, because that’s exactly what it’s designed to do (plus more). Let’s see how to do this with a real-life example.

 

We need to set several CPs to steer a deployment to the correct place. These are as follows:

  1. Environment. This sets the target cluster where the machine(s) will be built.
  2. Reservation Policy. Selects the right reservation policy (which can open up things like endpoint and datacenter).
  3. Network Profile. Sets the correct network on which the machine(s) will be attached.
  4. Name. Provides part of the name in our custom naming standard.
  5. Storage Reservation Policy. Sets the storage which will be consumed.

 

Now, we could list each of these individually in a request form and have the user select each and every one (after making sure to tell them how to complete the form). But even easier, let’s define only a single CP and have each of these 5 CPs get their value based on it. I’m going to create a CP called My.RequestForm.Prop and provide three possible values: A, B, and C.

 

 

I’ll show this as a dropdown in the form. These values can be whatever you like, but I’m just using simple letters in this case for illustration. For example, instead of A, B, and C these could be application names like Oracle, JBoss, and Apache.

 

Next, I want to set the first of the dynamic properties, Environment, based on the value of My.RequestForm.Prop. The Environment CP will be called CZ.MyProp.Environment and the choices I have are Prod, Dev, and QA. I’ll map them one-for-one to the values you see above in My.RequestForm.Prop. In other words, if a user selects A from the list, then the value of CZ.MyProp.Environment will be dynamically set to Prod. If the user chooses B then it’ll be Dev, and so on. In order to do this, all we need to do is create a new property on the deployment and call it SovLabs_CreateProperties_<anything>. The <anything> portion can truly be whatever you want—it’s only a label for your organizational purposes. So long as the property name begins with SovLabs_CreateProperties_ it’ll invoke the Property Toolkit module. The value of this property can take a variety of forms as explained in the documentation, but in this case let’s use an array of single objects, where each object defines the name of a new property and the value to assign it. For example, if I wanted to define a new property in the simplest way, I could set the value to this:

 

[{"name" : "CZ.Cities", "value" : "Texas"}]

 

When the module ran, I would then get a new CP defined on the deployment with a name of CZ.Cities and a value of Texas. But, because everything SovLabs does is templated, we have a whole host of operators at our disposal thanks to the templating engine. The one we want to use here is the case/when operator. This allows us to switch to a different value based on another value. So getting back to the new CP we want to set, the value of that property would be defined as such:

 

[{"name" : "CZ.MyProp.Environment", "value" : "{% case My.RequestForm.Prop %}{% when 'A' %}Prod{% when 'B' %}Dev{% when 'C' %}QA{% else %}UNKNOWN{% endcase %}"}]

 

It’s fairly straightforward. If the value set in My.RequestForm.Prop equals A then assign the value of Prod to CZ.MyProp.Environment. If that value is B then let it equal Dev and so on.

 

Now that we have that one, let’s define the others with the same basic statement. To simplify this, you can put them all in a single property group and attach that property group to your blueprint. To save time, I’ve drawn up the following table which shows the combinations.

 

CP Name                                          CP Value
My.RequestForm.Prop                              A              B              C
CZ.MyProp.Environment                            Prod           Dev            QA
CZ.MyProp.ResPol                                 ReservationA   ReservationB   ReservationC
CZ.MyProp.NetProfile                             NetworkA       NetworkB       NetworkC
SovLabs.Naming.App                               TST            ORA            APA
VirtualMachine.Disk0.StorageReservationPolicy    Diskstation    vSAN           Diskstation

 

Effectively, then, the value of a single property (My.RequestForm.Prop) will influence the outcome of five other properties dynamically. Graphically, it can be represented as the following.

 

If the value of My.RequestForm.Prop equals A then all the following values are set below it. If B, then all those in that column apply, etc. In this demo, I built a single property group and stashed all these properties there, although they could be in just a single dynamic property definition if you wish.
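
To make that concrete, here is a sketch of what two of the definitions from the table would look like combined into a single SovLabs_CreateProperties_ value, reusing the same array-of-objects syntax shown earlier (only two entries shown for brevity):

[{"name" : "CZ.MyProp.Environment", "value" : "{% case My.RequestForm.Prop %}{% when 'A' %}Prod{% when 'B' %}Dev{% when 'C' %}QA{% else %}UNKNOWN{% endcase %}"}, {"name" : "CZ.MyProp.ResPol", "value" : "{% case My.RequestForm.Prop %}{% when 'A' %}ReservationA{% when 'B' %}ReservationB{% when 'C' %}ReservationC{% else %}UNKNOWN{% endcase %}"}]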

 

 

The exception is the CP VirtualMachine.Disk0.StorageReservationPolicy, as this one must be set on the machine element in the canvas and not at the blueprint level.

 

If we flip over to the request form, we can see how simple this can be when presented to the end user.

 

 

Let’s select A then deploy to see what happens.

 

 

 

Based on the value A, Property Toolkit set the five CPs we were after, including SovLabs.Naming.App, which I then consume in another SovLabs module for Custom Naming (producing the “TST” portion of the hostname you see). It got the correct Environment, Reservation Policy, and Network Profile, and it also landed on the correct storage because that property, although on the machine element, could still be set from the initial value of A.

 

 

Although I’ve obviously created a few of these properties myself to illustrate what you can do, you can use this module to set any CP, even reserved system properties such as the one that controls the storage reservation policy. Can you not see how incredibly flexible this can make your deployments? We can ask the user to make but a single decision, and based on that outcome we can then dynamically set any number of other CPs. That’s amazing if you ask me and something that, prior to the Property Toolkit, required a whole heap of complex JavaScript and vRO plumbing to get done.

 

So, this is cool, but let’s take it one step further. Let’s let My.RequestForm.Prop influence another CP, and then let’s let *that* value influence another CP. In this manner, we can create cascading dynamic property assignment. Here’s what I mean.

 

Let’s still ask the user to pick a value of My.RequestForm.Prop. The value they choose will still influence the Environment CP (CZ.MyProp.Environment if you recall). But, rather than basing the Reservation Policy CP on My.RequestForm.Prop, what if we could determine that from the Environment CP? Graphically, it’d be represented like this:

 

 

And to put that in context with the other properties, the altered flow chart would result as the following.

 

 

To distinguish them from the values in the earlier table, I’ll change the possible values of CZ.MyProp.ResPol to the following: ReservationPrd, ReservationDev, and ReservationQA. Now, let’s change the definition of CZ.MyProp.ResPol to key off of CZ.MyProp.Environment instead of My.RequestForm.Prop.

 

[{"name" : "CZ.MyProp.ResPol", "value" : "{% case CZ.MyProp.Environment %}{% when 'Prod' %}ReservationPrd{% when 'Dev' %}ReservationDev{% when 'QA' %}ReservationQA{% else %}UNKNOWN{% endcase %}"}]

 

Let’s make the same request and see what happens this time.

 

 

Amazing! All we’ve changed is the definition for CZ.MyProp.ResPol in this case and nothing else. Can you imagine the possibilities this opens up? The freedom, flexibility, and power the Property Toolkit enables your vRA to have is limited only by your imagination.
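
And if you wanted to take the cascade a level deeper still, it’s just more of the same pattern. As a sketch (reusing the values from the table above), the network profile could key off the reservation policy rather than the request property:

[{"name" : "CZ.MyProp.NetProfile", "value" : "{% case CZ.MyProp.ResPol %}{% when 'ReservationPrd' %}NetworkA{% when 'ReservationDev' %}NetworkB{% when 'ReservationQA' %}NetworkC{% else %}UNKNOWN{% endcase %}"}]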

 

     To recap, then: in part one of the Property Toolkit series, we showed how it can synthesize new CPs based on the values of others, illustrated with the vCenter Folder use case. In part two here, we created dynamic property sets in which multiple CPs get their values from other CPs. We did this in two ways. The first was to assign five separate properties based on the value of one reference CP. The second was to cascade this logic by letting one property determine another, which in turn determines yet another. In both cases, we achieved dynamic assignment of various CPs, even in different places, by exposing just a single decision in the request form.

 

     I really hope you’ve found this article to be helpful and that it has stirred your imagination with all the various ways you can use Property Toolkit to make your life simpler and your vRA much more powerful all while reducing complexity. It really is game-changing integration that opens so many doors all while eliminating the need to write, test, and maintain custom code. If you have any thoughts or comments about this article, please feel free to reach me on Twitter (@chipzoller).

Have you ever needed or wanted to dynamically get the users within an Active Directory group through vRA? I hoped a process for this would already exist but, unfortunately, as I found out, the existing vRO Active Directory plug-in does not contain a pre-built workflow or action to handle it. As I typically do, I found my own way, and my trials and tribulations can easily be to your benefit as you’ll see. The use case for getting users within a group can vary. In my case, there is a customer for whom I’m building a CMP, and they wish to dynamically pull in all the users within one specific AD group to which they can assign provisioned machines for metadata purposes. This metadata is then sent in a custom notification telling the recipient about the new machine(s) that have been built and the user chosen at request time. So this group membership needs to be displayed as a drop-down in a request form, with the requestor able to pick a member. With a little bit of vRO work and a cool trick I learned in vRA, this is pretty easy, and others may find it useful.

 

     In my test environment here, I’m on vRA 7.3 with an external vRO 7.3. None of that should matter, as all these steps should be applicable to earlier versions as well. That said, I updated my vRO Active Directory plug-in to the latest available here. The newest version is v3.0.7 as of this writing. Once the plug-in is updated, run the “Add an Active Directory server” workflow found in Library -> Microsoft -> Active Directory -> Configuration.

 

Step 1a is fairly self-explanatory. Host in my case is just a single AD server. Base is the root object of your AD in DN format.

 

 

Since this is for use in vRA, we want to use a shared session with a service account and avoid per-user sessions.

 

What’s nice in step 1c is you can provide multiple AD servers that can be attempted. The algorithms are Single Server, Round Robin, or Fail-Over.

 

 

And, finally, add a timeout value to wait before failing. I chose 5 seconds but, depending on your AD size, this may need to be longer.

 

Once the workflow successfully completes, verify you have it in your inventory explorer.

 

 

The tree here shows it is indeed working.

 

Once you’re good there, import the action I’ve pre-built. The JavaScript required in order for it to function is fairly rudimentary:

 

// Build an array of user names from the members of the supplied AD user group.
// userGroup is the action's input parameter (type AD:UserGroup).
var userArray = new Array();
var usersInGroup = userGroup.userMembers;
for each (var user in usersInGroup) {
     //System.log(user.Name);
     userArray.push(user.Name);
}
//System.log(userArray);
return userArray;

 

I’ve commented out a couple lines that allow logging the user names to System just for development and troubleshooting purposes to ensure the results are returned and in the correct format.
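
One small optional tweak: if you want the names presented alphabetically in the dropdown, a userArray.sort() just before the return statement will handle that (sort() is standard JavaScript and available in vRO's scripting engine).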

 

With this action imported, flip over to vRA. Go create a new Custom Property definition. In the one below, I’m calling it CZAD.Users and choosing to get external values from the new action.

 

Click on the userGroup input parameter and edit it. The value is going to be the identifier in vRO’s AD inventory that corresponds to the user group whose users we want to list.

 

#_v2_#,#UserGroup#,#04452c1c-2a57-4bbb-bf1d-ada37c2edea0#,#CN=Middleware,OU=vCAC,DC=zoller,DC=com#
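
Breaking that value down, as best I understand the format: the fields between the # marks are a version marker (_v2_), the vRO inventory type (UserGroup), the object's unique VSO ID, and a display label (here, the group's distinguished name).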

 

Flip back over to vRO and browse in your AD inventory tree to find the group whose members you wish to list. Click the user group in the tree and check the General tab.

 

 

You see VSO ID as shown above? This is the value we’ll copy and paste into the input parameter definition back in vRA. This is a unique ID which essentially is an API “shortcut” to reference this one specific group and no other. With this, we can avoid having to hardcode the name of this group into our action or pull it from some other place in vRO. Since userGroup is the input object, we can simply supply this value with vRA and be done with it—so one place to go if you wish to change that group later.

 

With this VSO ID copied and pasted into the input parameter, save the custom property. Let’s add it to a blueprint and see if it works.

 

 

Go to the request form now.

 

 

And boom, there are the users in that AD group! Don’t believe me? Go check the membership for yourself in AD to compare.

 

 

And there you have it! With this simple action, you can pick a user from an AD user group and consume it as a string-based custom property anywhere you like in vRA.

 

A very simple little action, but it solves a real use case and others may find it useful as well.

 

 

Download getADGroupMembership from VMware {code}

vRealize Automation is a very metadata-driven system through a capability called custom properties. These properties—which are nothing more than key-value pairs—can be applied at multiple levels throughout vRA and cause it to behave in different ways. The same sort of metadata drives a lot of decisions within vSphere which are external to vRA. But applying these tags at a vSphere level has been somewhat of a challenge in the past when driven through vRA because there is no out-of-the-box support for vSphere tags. vSphere tags are useful in a whole variety of ways. Some examples include attaching descriptive labels to objects that identify ownership, application, or customer. Others include identifying backup jobs and the policies on which they’re based. Being able to incorporate these vSphere tags into vRA can therefore be very important, and it’s something I see more and more customers adopting within vSphere and looking to harness in vRA. Recently, the SovLabs group has taken up the yoke and produced a vSphere Tagging module for vRA which allows super simple assignment of those vSphere tags with no custom vRO code or mucking around with fragile plug-ins of any sort. So in this article, I’m going to illustrate how this module can be leveraged to apply vSphere tags through vRA and the power and flexibility afforded you in doing so.

 

Unlike a lot of the SovLabs functionality, which is easily configured through XaaS catalog items like adding an Active Directory configuration or a Satellite configuration shown below, the vSphere tagging utility is really driven through custom properties (like all of SovLabs’ modules) and requires almost no configuration. The only thing necessary to get started is a vCenter endpoint.

 

 

 

After selecting the catalog item, we need to provide information on the vCenter Server endpoint where tagging will be applied. Why not just configure the vRO vCenter plug-in, you ask? A moment of departure is necessary at this juncture.

 

 

This endpoint is created internally to the SovLabs configuration and in no way relies upon the pre-existing vRO vCenter plug-in. That’s a convenient thing, because doing vSphere tagging without SovLabs requires configuring the vCenter plug-in, configuring the vAPI plug-in, and then patching together workflows on your own. Not exactly simple and elegant. One of the great things the SovLabs team has done when designing and building their solutions is embed the entire Java SDK for vSphere within their own plug-in (hence the size). This has the benefit of zero reliance upon any other vRO plug-ins, allowing them to “own” the configuration and connection from start to finish. So, essentially, when configuring this vCenter endpoint requested through the catalog, you’re enabling an internal connection that is self-contained.

 

Anyhow, configure the vCenter endpoint with the requested fields. I’m using an embedded PSC model so I’ll check the box. And if you want to save this credential for later use, check that box as well.

 

 

 

Once that request completes successfully, you’re ready to roll! Time to create some tags.

 

            The tagging functionality, as mentioned earlier, is entirely driven through the presence of custom properties on attached items within vRA. The module will watch for a custom property with a specific name and, if that name is present, it will attempt to assign tags based on its value.

 

The magic custom property that enables tagging is called SovLabs_CreateTags_VMW_<something>. The <something> part can literally be anything you wish and is only a descriptor to aid your organization. The value of this property is the tagging information itself, which can be specified in a number of ways and includes the tag name, category name, and other properties. To show how easy it is to assign an existing tag called “Infrastructure” which belongs to the category “Department”, we create the following custom property on a machine in our blueprint.

 

SovLabs_CreateTags_VMW_MyTest    ["Infrastructure", "Department"]

 

 

 

Submit the request, wait for it to complete, then check the resulting VM. You’d get the following.

 

 

 

Boom. It’s that simple. The presence of the “magic” custom property name invokes the tagging functionality, and the value of that property has information on the tag(s) to be applied. Nothing else you need to do.

 

What if you want to assign multiple tags in one go? No problem, there are a couple ways. You could create multiple custom properties each beginning with the necessary prefix. You could create a property group and put those properties within it. Or you can just use the one custom property and add multiple entries within the same value like this:

 

[["Infrastructure", "Department"], ["Oracle", "Application"]]

 

  I’m using an array of arrays here, but the SovLabs engine allows several different ways to pass in the values as described in the docs, including JSON. But again, pretty simple, right? The SovLabs tagging module also allows you to create both tags and categories and assign them. How do you do that? Simple. Just like you assign tags. If the tag or the category is not found, it is automatically created for you, then assigned. In the above example, if Infrastructure/Department had existed and the tag category “Application” existed but not the tag “Oracle”, then the module would know to create “Oracle” for you. You can even set the description for that tag in the value for your custom property. And when creating a tag category, you can also set the cardinality of that category right through the module as either single or multiple with a simple true/false in your value.
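
The exact schema for creating tags and categories this way (including descriptions and cardinality) is spelled out in the SovLabs documentation, so treat the following purely as an illustration of the idea; the field names here are hypothetical, not the documented ones:

[{"tagName" : "Oracle", "categoryName" : "Application", "description" : "Oracle database workloads", "cardinalitySingle" : true}]

Again, consult the docs for the real key names before using anything like this.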

 

 

 

Hopefully by now you’ve seen and understand how simple vSphere tagging can be with the SovLabs module, but that’s only the beginning as far as the flexibility it allows. Because everything in SovLabs uses a template engine and the behavior is essentially controlled by custom properties, you have a lot of power at your fingertips for creating vSphere tags.

 

Let’s say you need to assign a tag to every VM that results from a given business group, but you have multiple business groups and multiple blueprints, so you can’t hardcode custom properties at the blueprint level. That’s no problem. We can set the value of the property that creates tags to be an external custom property. For example, I can create a custom property on the business group level called SovLabs.Tagging.Department and provide it the value of Infrastructure.

 

 

 

I want every blueprint that comes out of this business group to get the tag called “Infrastructure”. So in my blueprint, I can create the following custom property that is for universal tagging by business group like so.

 

SovLabs_CreateTags_VMW_BusGroup    ["{{SovLabs.Tagging.Department}}", "Department"]

 

 

 

Each time this blueprint is provisioned, it pulls whatever the value is at runtime for SovLabs.Tagging.Department and sets that as the name of the tag found within the tag category called “Department”. The end result is that, for this deployment, I still get a tag named “Infrastructure” assigned to the VM.

 

            This functionality, coupled with the ability to template everything in SovLabs, makes it extremely flexible and powerful, capable of solving almost any use case when it comes to tag assignment. So not only is setup super simple without requiring you to mess with vRO plug-ins in any way, but configuration is equally simple through a combination of a single vRA catalog request and custom property assignment. It’s up to you to think of the possibilities of how you can now use vSphere tagging in a very streamlined and flexible way. I hope you’ve found this useful; now the only thing left is to go forth and tag to your heart’s delight!

Removal of datastores is one of those things that seems like it would be simple, and it’s not complex, but there are a few steps and you have to do them in the correct order to produce the best result. There are also some KB articles out there that are rather outdated, have mixed information, and don’t cover the graphical options available today. So in this short article, I’m going to run through the best-practice way to remove a datastore from an ESXi host without destroying its data. These steps were generated on vSphere 6.5 U1 using the web client. This method is preferable for workflows like unpresenting storage from a set of hosts and migrating it to others, lab tests, or anything else where you want a clean inventory view while preserving VMs.

 

Right, so here’s what my inventory consists of.

 

Hosts and Clusters View

 

Datastores View

 

 

I’ve got an iSCSI datastore mounted to two clusters of two hosts, and there are two VMs on this datastore. The objective is to remove the ClusterA-iSCSI-01 datastore from all hosts leaving no traces behind in inventory. What we don’t want is for them to show up like this; if you’re reading this, you’ve probably made that mistake before.

 

 

 

Let’s go through the steps here:

 

  1. Power down all VMs on the datastore you wish to remove.
  2. Unregister all powered down VMs from inventory.
  3. Unmount the datastore from all hosts.
  4. Detach the device from all hosts.
  5. Rescan for storage devices.

 

Five steps, and you need to do them in this order if you want clean removal of the datastore from your vCenter’s inventory while preserving the data. Let’s go through them one by one.

 

 

1.  Power down VMs.

 

Pretty simple. All VMs on this datastore must be powered off.

 

2. Unregister all VMs.

 

Again, straightforward. You cannot unmount a datastore that has any registered VMs on it, which includes templates. Right-click each VM and unregister it.

 

 

 

3.  Unmount the datastore.

 

  With all VMs unregistered, right-click on the datastore and unmount it.

 

 

Make sure you select all hosts.

 

 

 

Wait for the vSphere task to complete. Once done, you should see this in inventory.

 

 

 

 

4.  Detach the disk device.

 

 

Now that it’s unmounted, we have to go to each ESXi host and detach the disk device that corresponds to that VMFS volume. Go to the host and choose Configure -> Storage Devices.

 

 

 

Highlight the device corresponding to the datastore you just unmounted. From the Actions menu, choose “Detach” or click the icon.

 

 

 

Click Yes to confirm.

 

Once detached, the state should change.

 

 

 

Repeat this process on all hosts.

 

 

5.  Rescan for disks.

 

Still on the Storage Devices screen, click the Action menu and choose Rescan Storage.

 

 

 

Click OK for both check boxes.

 

Rinse and repeat on all ESXi hosts.

 

Once all hosts have been rescanned, you’ll see the previously inactive datastore has vanished.

 

 

 

At this point, you may now have your storage administrator unpresent the LUN from the ESXi hosts, but not before. Once they have done so, if you rescan one more time, you’ll see the old device that was in a Detached state has been removed from the list entirely.

 

And that’s it. It’s not a complex process, but it needs to be done in a certain order to make your vCenter inventory and host view all tidy.
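
As a closing footnote, if you have many hosts, the per-host parts of this process (unmount, detach, rescan) can also be driven from the command line. Here’s a rough esxcli sketch for a single host; the volume label matches this example and the naa identifier is a placeholder you’d substitute with your own device:

# List VMFS volumes to find the label/UUID of the datastore
esxcli storage filesystem list
# Unmount the datastore by its volume label (step 3)
esxcli storage filesystem unmount -l ClusterA-iSCSI-01
# Detach the backing device by its identifier (step 4); off = detached
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx --state=off
# Rescan all storage adapters (step 5)
esxcli storage core adapter rescan --all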


vCSA File Backup Fails


I encountered this issue in my home lab recently whereby vCSA 6.5 U1c was failing the file-based backup through the VAMI with the message “BackupManager encountered an exception. Please check logs for details.” A very generic error message to be sure, and not at all helpful. I checked the log responsible at /var/log/vmware/applmgmt/backup.log and saw the following message.

 

2017-12-09 02:46:10,833 [ConfigFilesBackup:PID-38335] ERROR: Encountered an error during ConfigFiles backup.

Traceback (most recent call last):

  File "/usr/lib/applmgmt/backup_restore/py/vmware/appliance/backup_restore/components/ConfigFiles.py", line 223, in BackupConfigFiles

    logger, args.parts)

  File "/usr/lib/applmgmt/backup_restore/py/vmware/appliance/backup_restore/components/ConfigFiles.py", line 132, in _generateConfigListFiles

    tarInclFile.write('%s\n' % entry)

UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 100: ordinal not in range(128)

 

Very odd, especially the last line about the ascii codec error. I went and looked at the Python script to see why it might fail at this step. I checked line 132 to see what it was doing.

 

 

After some more looking through the script, it looks fine. It’s stripping characters off paths to build the file list. Nothing unusual. I just do a cursory check around the vCSA filesystem to see what might be going on. Something catches my eye when I do a df -h /.

 

 

It might be difficult to see, but the last entry is a UNC path to an SMB share. I then remember that I created a Content Library that is connected via SMB to my Synology. Looking at the Python and the traceback again, that UnicodeEncodeError on u'\xe9' (an 'é') suggests the script is choking on a non-ASCII character in a path on that SMB mount while building the file list. Prior to that, the backup task is pretty simple.

 

 

 

 

 

The second screenshot in the backup wizard was initially alarming as the “common” part was showing 0 MB in size, which clearly isn’t right. When running the backup, it fails in very short order and produces no files in the destination path.

I delete the Content Library, make sure it’s unmounted from the filesystem, then attempt the backup again. Complete success.

 

TL;DR:  The vCSA file-based backup has a bug and fails if you have a Content Library mounted over SMB. Remove or unmount the Content Library in order to have the backup succeed.

Not a long post, actually more like a "note to self" moment, but I figured I'd post it where others can see. Probably won't be seen or cared about, but this was really, really odd and I thought I should write it down for posterity's sake. Anywho...

 

I recently did some major networking reconfigurations in my home lab that involved decommissioning an older D-Link (yes, I know) managed 1 GbE switch in favor of a much nicer L3 Dell PowerConnect 6248 that I got for a song on eBay. I wanted to move over to the L3 switch so I could have a lab that's better aligned to a real enterprise environment, one that allows me to do just about everything you'd see in an enterprise from the perspective of VMware technologies, and this includes, most importantly, NSX. As part of this process, I wanted to move entirely away from vSSs for VM communication and on to a vDS utilizing various VLAN-backed port groups. I'm an extensive vRA user/tester/developer, so vRA is plumbed into everything. Once I got the new switch configured, up and running, and everything migrated to it, I created the new vDS and port groups. Each host had a new uplink dedicated to this vDS. The existing standard switches contained the default "VM Network" port group that still existed on vSwitch0. Once the new vDS was up, all VMs were migrated over to it. Templates were migrated next. All good, no interruptions.

 

Once the vDS migration was complete, I reconfigured my vRA lab environment to consume the VLANs through my reservations. I simultaneously deactivated the legacy "VM Network" port group that existed on the standard switches. Later, I deleted this port group from all hosts. After a couple provisioning tests, the VMs weren't landing on the vDS port group they should have. Strange, I thought. At the same time, I noticed in the Networking inventory view of my vCenter that this VM Network phantom port group remained but with no hosts showing. wtf.jpg?? Now, I've encountered this in the past, but it was usually due to templates not being updated. I made sure to do this and reconfirmed no ports were active for any VMs or templates. I also checked each host from esxcli to verify this, and there was no VM Network port group. I also checked vCenter via PowerCLI and *still* no VM Network port group. I decided to give vCenter the ol' razzle dazzle and reboot the sucker. Once back up, the VM Network port group was still freaking there!

I was thinking at this point that something in Postgres was jacked and I was going to have to excise the tumor manually. But I was also thinking this problem was somehow linked to my vRA-provisioned test machines not getting the correct VLAN. I then remembered that I'm using linked clones to speed the provisioning process. When I took those snapshots, I hadn't yet introduced the vDS, and so the templates were joined to VM Network. But I thought that shouldn't matter since vRA should be reconfiguring them via the customization that occurs. I converted the templates to VMs, deleted the snapshots, ensured they were joined (again) to the vDS port group of my choice, and re-snapshotted. Once I deleted those old snapshots, the phantom VM Network port group disappeared. And after running a data collection in vRA yet again and updating my blueprints to select the new snapshot name, provisioning was working to the correct port group in my reservations.

 

Anyway, super odd and something I've never run across before. Hopefully it helps someone (probably won't, though).

 

TL;DR - If you have snapshots of VMs or templates that were taken on an old vSS, you'll need to commit the snapshots or else phantom port groups may remain. vRA may also fail to provision to the correct port groups on linked clones unless you do so as well.

 

     In vSphere 6, the Platform Services Controller (PSC) contains a number of pieces of functionality including the SSO components, certificates, licensing, and other core parts that make vCenter function. Because of its importance, it is possible to join external PSCs that share the same SSO domain in a replication topology whereby their states are kept synchronized. When a change is made to, for example, a vSphere tag, that change is asynchronously replicated to the other member(s). In the instance where a single PSC failed, vCenter could be manually repointed to another PSC that was an identical copy of the one that failed. Use of a load balancer in front of replicating PSCs can even provide automatic PSC switching so that manual repointing is unnecessary. This is all well and good and tends to work just swimmingly; however, what happens when PSC replication fails? Bad things happen, that’s what. The next logical question to ask then becomes, “how can I know and be informed when my PSCs have stopped replicating?” To this, unfortunately, there doesn’t seem to be an out-of-the-box answer displayed in any GUI present in vSphere 6.5. Although you can set up replication through the GUI-driven installer when deploying PSCs, that’s basically the end of the insight into how replication is actually functioning. And when vCenter is pointed at a PSC with a replica partner, the additional PSC shows up under System Configuration in the web client yet does not inform you of replication success or failure.

 

 

Clearly, there is room for improvement here, and like the majority of my articles I want to try and find a solution where one currently doesn’t exist. In this one, I’ll show you how you can use a combination of Log Insight and vRealize Operations Manager to be informed when your PSCs stop replicating.

 

            In my lab I’ve set up two PSCs in the same site and same SSO domain (vsphere.local) which are replica partners. I have a single vCenter currently pointed at PSC-01. When I make a change to any of the components managed by the PSC through vCenter (or even PSC-01’s UI at /psc), that change is propagated to PSC-02. This replication happens about every 30 seconds. By logging into PSC-01, we can interrogate the current node for its replication status using the vdcrepadmin tool. Since its location is not in $PATH, it has to be called by its full path.

 

root@psc-01 [ / ]# /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w VMware1!

Partner: psc-02.zoller.com

Host available: Yes

Status available: Yes

My last change number:             1632

Partner has seen my change number: 1632

Partner is 0 changes behind.

 

From this message we can see its replica partner (psc-02.zoller.com), if it’s currently available, the last numbered change seen by the source PSC, the last numbered change seen by the replica partner, and then how many changes behind that represents. When everything is functioning properly, you should see something like above. If you were to run the same command on PSC-02, you’d get very much the same response minus the change numbers.

 

Now, if I go over to PSC-02 and stop vmdird, the main service responsible for replication, and re-run the vdcrepadmin command on PSC-01, the message would look like the following.

 

Partner: psc-02.zoller.com

Host available: No

 

And if we look at the log at /var/log/vmware/vmdird/vmdird-syslog.log, we see corresponding failure messages.

 

17-11-29T14:48:00.420592+00:00 err vmdird  t@140610837108480: VmDirSafeLDAPBind to (ldap://psc-02.zoller.com:389) failed. SRP(9127)

 

Yet despite replication obviously failing, vCenter shows no change in status. Although there is a pre-built alarm in vCenter called “PSC Service Health Alarm”, this only applies to the source PSC (where vCenter is currently pointed) and has no knowledge of replication status. Total bummer, as you’d really hope to see something trigger inside vCenter. Maybe one day. (Cue sad face and violins.)

 

Anyhow, if vCenter won’t do it for you, we’ll do it ourselves. Since we know the logged messages that occur when failure is upon us, we can use Log Insight to notify us. And, furthermore, if we integrate Log Insight with vROps, we can send that alert to vROps and have it associated to the correct PSC virtual machine. For this to work, we obviously need both applications, and we also need to integrate them. See some of my other articles that cover this as I won’t spend time on it here, but it’s a pretty simple process.

 

After they’re stitched together, we need to get logs from our PSCs over to Log Insight. Log into the VAMI for each PSC on port 5480. Plug in your vRLI host in the Syslog Configuration portion as shown below.

 

 

Do this for both PSCs.

 

NOTE: Although the Log Insight agent can also be installed on the PSCs to provide better logging, it is not required just to get visibility into the replication process.

 

Verify from your vRLI server that logs are indeed being received from those sources. They should show up in Interactive Analysis and Administration -> Hosts.

 

 

Also confirm that the vmdird log stream is coming in. Since Log Insight already extracts the proper fields, we have the convenience of just asking for that stream directly in a query.

 

 

Change the resolution to a longer period to ensure logs are there. And if you want to watch how logs change based on activities you perform on the PSC, try to create a new user in the internal SSO domain, add a license, or create a vSphere tag. Update the query you’ve built and see what you get.

 

 

 

Starting at the bottom, you might be able to figure out I added a new user, and then about 15 seconds later that change was replicated to the peer PSC, as evidenced by the top line.

 

Once we have verified good logs, we can create an alert based on them. Stop vmdird on the PSC replica and watch the failure logs come rolling in.

 

root@psc-02 [ ~ ]# service-control --status

Running:

applmgmt lwsmd pschealth vmafdd vmcad vmdird vmdnsd vmonapi vmware-cis-license vmware-cm vmware-psc-client vmware-rhttpproxy vmware-sca vmware-statsmonitor vmware-sts-idmd vmware-stsd vmware-vapi-endpoint vmware-vmon

root@psc-02 [ ~ ]# service-control --stop vmdird

Perform stop operation. vmon_profile=None, svc_names=['vmdird'], include_coreossvcs=False, include_leafossvcs=False

2017-11-29T15:12:57.617Z   Service vmdird does not seem to be registered with vMon. If this is unexpected please make sure your service config is a valid json. Also check vmon logs for warnings.

2017-11-29T15:12:57.617Z   Running command: ['/sbin/service', u'vmdird', 'status']

2017-11-29T15:12:57.655Z   Done running command

Successfully stopped service vmdird

 

 

We can see the failure messages now, and from these we can create a new alert. Highlight the “VmDirSafeLDAPBind” part and add it to the query.

 

 

Now, highlight “failed” and do the same. Put them on two separate lines because entries on the same line are effectively an OR statement. Your built query should look like the following.

 

Remove the hostname portion so as to make this alert more general. Now, on the right-hand side, create an alert from this query.

 

Complete the alert definition including description and recommendation as these are fields that will show up when we forward it to vROps.

 

 

 

Check the box to Send to vRealize Operations Manager and specify a fallback object. This is the object with which vROps will associate the alert if it cannot find the source. Set the criticality to your liking. Finally, on the Raise an alert panel at the bottom, select the last radio button and choose “more than” with a number less than 20 in the last 5 minutes. Since the PSCs replicate approximately every 30 seconds and two messages match the query every period, a sustained failure would produce 20 logged entries in a 5-minute period, so you want to stay under that if your timeline is 5 minutes. Save the query and be sure to enable it.

 

 

 

With the alert configured, saved, and enabled, let’s give it a try and see what happens. On your PSC replica, again stop vmdird with service-control --stop vmdird and wait a couple minutes. Flip over to vROps and go to Alerts.

 

 

Great! The alert fired, got sent to vROps, and even was correctly associated with PSC-01, the PSC from which the errors were observed. And the description and recommendations that were configured for the alert also show.

 

 

Now you’ll have some peace of mind knowing whether your replication is working properly and, if not, when it broke. I’m also providing my Log Insight alert definition that you can easily import if you’d prefer not to create your own. So hopefully this makes you sleep a little bit better knowing that you won’t be caught off guard if you need to repoint your PSC only to find out it’s broken.

Introduction

 

     The topic of vSphere upgrades is a hot one with every new release of VMware’s flagship platform. Architectures change, new features are introduced, old ones are removed, and so everyone is scrambling to figure out how to move to the next version and what process they should use. There are generally two approaches when it comes to vSphere upgrades: in-place upgrade or migrate. In the in-place upgrade process, the existing vCenter Server is preserved and transformed into the new version, while in the migration method, new resources are provisioned using the new version, which then take over from the old resources. Primarily the new resources consist of the vCenter Server and its accouterments, while ESXi hosts are simply moved over to it and then upgraded. Therefore, both strategies see ESXi hosts being upgraded in-place. While there are pros and cons to each approach, I want to explore the migration method in particular since this is a question I often get from customers and the community at large. In addition, the in-place upgrade approach is fairly well documented with steps and procedures from VMware while the migration method receives little, if any, attention. Let’s go through the process of the migration method and discuss how it works, what’s involved, and the gotchas of which to be cognizant.

 

Why Migrate?

 

     Upgrading vSphere is no simple task regardless how you go about it. Although VMware has done a good job of making this process easier and more reliable, there are still a number of things you as an engineer or administrator are responsible for doing to ensure it ultimately succeeds. Before deciding if you want to go straight to a migration rather than in-place upgrade, we need to lay out the pros and cons of each. Here’s a table which has the most salient points.

 

In-place Upgrade vs Migrate Pros and Cons

Method: In-place Upgrade
  Pros:
  • Preserves config and data
  • Can be quicker
  • Some solutions carry over
  • No reconfig of external apps
  • Convenient utility for vCSA
  Cons:
  • Preserves unoptimal config
  • Config has legacy settings
  • Higher risk of failure
  • Future risk of breaking
  • Can’t change architecture

Method: Migrate
  Pros:
  • New config from scratch
  • Best practice settings default
  • More controlled process
  • Less risk of failure
  • High chance of future upgrade readiness
  • Ability to change architecture
  Cons:
  • More planning time
  • Manual work moving items
  • Lose historical data
  • Must reconfigure apps

 

The in-place upgrade has advantages like preserving performance data because the vCenter database is kept intact. Since it’s the same vCenter, the identity is carried forward as are all the settings. It can sometimes be quicker to upgrade since you’re not standing up a new vCenter, and if you’re moving from Windows to the appliance there’s a handy migration utility that streamlines this process. Lastly, any solutions or other third-party applications you have which rely on vCenter continue to work (if they’re compatible).

 

However, there are some serious drawbacks to consider as well. Going with an in-place upgrade means settings which may not be optimal on the new version are carried forward rather than altered. In preserving the configurations, you may also be moving things along which were mistakes or not according to best practice to begin with. There’s a much higher risk of failure due to things like database issues, which are rampant, underlying OS issues, and the fact that in any enterprise software development, the majority of the efforts in QA are focused on testing net new deployments. It’s only understandable that vendors focus on predictable deployment patterns rather than trying to model millions of possible permutations of different versions crossed with different settings—it’s a matrix from hell. An in-place upgrade has a higher risk of breaking as future patches and updates are scabbed onto the software. A combination of legacy settings and non-optimizations creates somewhat of a ticking time bomb for any further updates, owing again to the possibilities covered in the development and test phases. And last, an in-place upgrade won’t allow you to change your vCenter architecture. It’s very common to see vCenter deployments that, due to time, budget, personnel, or other constraints, were slung together and not well planned and thought out. Perhaps the architecture was wrong on day one, or maybe your company has simply grown organically or through acquisitions and you now find the need for multiple vCenters and a more resilient architecture. In-place upgrades don’t allow you to change what you have, merely stand pat and bump up to the latest release.

 

When it comes to the migration path, you still have some negatives that should be understood. In a migration, since this is a new vCenter, there’s more planning that is involved as you understand dependencies and port elements over. This translates to more time spent on the overall upgrade process. And because this is a lift-and-shift operation, you’ll lose historical data in the vCenter database as well as be required to repoint any external applications that talk to vCenter. More on all this in the Moving to Migrate section.

 

The positive aspects of a migration as opposed to an in-place upgrade are extremely compelling, however. This is a fresh, clean slate, so you have the opportunity to right past wrongs, fix non-optimal settings, and conform to best practices without having to worry about transporting and then readopting a bunch of junk from prior versions. If your present vCenter environment has been upgraded across at least one major version in the past (for example, from vSphere 4.1 to vSphere 5.0), this is usually a clear signal to break with in-place upgrades and opt for a migration. The migration process is much more controlled, so you can take the time to be thorough and fix issues as they arise without worrying about downtime. The risk of failure is very slight because everything is new and fresh, so there’s no worrying about database corruption or rogue tables killing your upgrade. Since this is essentially a new environment, future patches and upgrades are much more likely to go without incident because you are on a common, known-good platform. And, lastly, you can learn from prior mistakes, assess the needs of your company, and improve upon earlier architectures by designing a new one and putting it into play. When the time comes and you’re satisfied, you can then begin to bring things over piece by piece until the legacy environment is entirely vacant and deprecated, then dispose of it forever.

 

  Weigh each option carefully to determine if the pros column outweighs the cons column in your case. And for some, an in-place upgrade is the only possibility due to a variety of reasons. However, keep in mind the ultimate goal with any upgrade is not only to satisfy the primary objective of moving to the later version, but to ensure the platform remains stable, reliable, secure, and performant. Pursuant to those goals, it has been my experience that a lift-and-shift migration, while involving some legwork, ultimately produces the best result in the long run and sets you up for a more stable vSphere.

 

Moving to Migrate

 

     In a vSphere migration process, there are three large steps that occur and, while they sound simple, are actually complex in the implications that arise from such a movement.

 

  1. Stand up new vCenter on new version
  2. Move ESXi hosts to new vCenter
  3. Upgrade ESXi hosts

 

Leading up to these steps is much planning in figuring out how exactly to do this. The devil, as they say, is in the details. Because this is essentially a new vCenter infrastructure design, we have the opportunity to adjust what might not have worked so well in the past and adopt a clean and new architecture that better suits our needs. Some questions to ask and then answer include:

 

  1. What type of vCenter platform will I use?
  2. What will the size of my inventory be?
  3. How will this grow in the foreseeable future?
  4. Will I use an external PSC?
  5. Do I need to link additional vCenters?

 

Obviously, the answers to these questions will be specific to your needs and that of the business and so are out of scope for this particular article, with one exception being the vCenter platform. Because Windows-based vCenters are going away, the appliance should be the only thing on your radar. The point being that you are planning for a greenfield deployment as if your existing datacenter was a new one entirely. Once you’ve settled on a vCenter architecture, we have to get from the current state to the new state. This is where the next batch of planning comes in. Because of the complexities of vCenter and the various features it enables (which you may be using), there are a whole host of things that must be moved and due diligence done before swinging hosts. An exhaustive list is not possible, but here are the 10 major things you should check and plan to either move or recreate. Keep in mind that although this list is tailored towards a migration, several items are universal irrespective of which upgrade method you elect.

 

10 Things to Check Before Migrating

 

1. Custom roles and permissions

Any roles you’ve cloned and customized in your existing vCenter will not be moved with hosts and so must be recreated. Also, if you’ve applied those custom roles to specific objects in the vCenter hierarchy, those will need to be documented and recreated. Even if not using custom roles, existing out-of-the-box roles that are applied at granular levels inside vCenter will need to be recreated.

 

2. Distributed Switch

The vDS is a vCenter-only construct and will have to be dealt with first. While you can back up and restore that vDS via the web client, hosts will have to be migrated to a vSS first before vCenter will allow you to disconnect them. This is a topic unto itself, but you will need a minimum of two uplinks to perform such a migration as well as some careful planning. It can be done with VMs online, but the point is you have to get to a vSS first, then reverse the process later.

 

3. Folders, Resource Pools, Compute/Datastore Clusters

Once again, these are all vCenter constructs and will not follow the hosts. Any vSphere folders, resource pools, compute or datastore clusters will need to be recreated on the destination. Other vCenter-specific resources include storage policies, customization specs, host profiles, vSphere tags, DRS rules, and licenses. While some of these objects have native, GUI-driven exportation abilities like host profiles as shown below, others like vSphere folders will require you to drop down to PowerCLI and do some scripting, as in the sketch below. In most cases, there are existing PowerShell scripts you can leverage to help, but you’ll need to consider these before swinging hosts.
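
As one small example of the sort of scripting involved, here’s a sketch (assuming an active Connect-VIServer session against the old vCenter) that dumps the VM folder names and their parents to CSV so the hierarchy can be recreated later; community scripts do this recursively and far more robustly:

# Capture VM folder names and their parents for later recreation
Get-Folder -Type VM |
    Select-Object Name, @{Name='Parent';Expression={$_.Parent.Name}} |
    Export-Csv -Path .\vm-folders.csv -NoTypeInformation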

4. ESXi version compatibility

In vSphere 6.5, for example, vCenter 6.5 cannot manage hosts below 5.5 and so before committing to this process, you need to ensure the existing ESXi hosts will support being connected to the next version of vCenter prior to them being upgraded.

 

5. Hardware support (compute, storage, network)

Further to #4, you must check your hosts, storage, and network against the HCL to ensure they will support being upgraded to the target new version. This is something that is overlooked far too often and leads to major issues. Vendors are the ones who usually do compatibility testing on their platforms, and so not all servers will support the latest version. In order to be in a safe place if you need support, all hardware must be validated against the HCL. Also, don’t forget about your physical network and storage equipment. These must be validated every bit as much as your ESXi hosts.

 

6. Firmware updates

And further to #5 is the matter of firmware updates for said physical equipment. Although you may have validated that your servers and storage are indeed supported with the latest version of vSphere, they may not be running a compatible or supported version of the underlying firmware. This can be critically important if you wish to avoid outages and instability in your vSphere platform. Every piece of hardware on the HCL contains corresponding validated firmware that forms the support statement.

 

For example, in the image above you can see that the HP Lefthand storage array must have at least SANiQ 12.5 if using the be2iscsi driver to be compatible with ESXi 6.5 U1. Other drivers, which depend on the network adapter in use, may have higher requirements. You must take care to ensure that all combinations of hardware have been validated against the HCL and work with various teams internally to come to an understanding on what, if any, upgrades are necessary prior to upgrading ESXi.

 

7. vSAN, NSX, and other VMware solutions

This is a very broad topic, but if you’re running vSAN or NSX then there are specific validations that must take place there. Any other VMware solutions you may have such as vROps, vRA, SRM, Log Insight, Infrastructure Navigator, Horizon View, etc. must all be checked for their individual levels of support and interoperation with the new version. Use the Interoperability Matrix to check these solutions, and then use the KB for proper upgrade order in the case of vSphere 6.5. For example, if you are using NSX, you may need to upgrade it before you perform the migration. Also worth mentioning, while not so much a concern any longer since vCenter 6.5 now has it baked in, is Update Manager. Some shops are very particular about their VUM installations. This is something else you must leave in the dust, so make preparations to migrate any builds, patches, and baselines to the new vCenter. Lastly, if using Auto Deploy then you’ll want to take that into consideration as well since it has some special requirements.

 

8. Plug-ins

Also a broad topic but any third-party plug-ins you might have, for example with your storage vendor, will need to be validated, possibly upgraded, then migrated or reregistered against the new vCenter. Check vCenter for a list of these under Administration -> Solutions heading at Client Plug-Ins and vCenter Server Extensions. For deeper insight into what is registered and where it is, see William Lam’s article on using the vCenter MOB. Check with each respective vendor to figure out what that process may be and if you’ll need to perform any sort of backup or restore procedure for the data that may have been created or managed by those plug-ins.

 

9. SEAT data

Stats, Events, Alarms, and Tasks (SEAT) data will be left behind in your existing vCenter because this is all stored in the database and does not travel with the hosts. Stats are the performance statistics when you open the performance charts on an object. Events are any event on any object accessible from the Tasks and Events pane. Alarms are any existing, active alarms as well as those you have customized plus those created automatically by other solutions or plug-ins. Tasks are any records of activities performed manually or programmatically and serve as an audit log. If you’re using something like vROps, most of this information will be preserved there, but if not, be cognizant that you must give this up once hosts are swung.

 

10. Backup, replication, and monitoring

Very important and often overlooked. Special applications such as backup, replication, and monitoring will need to be validated for support and functionality, but will also need to be reconfigured or updated once the resources for which they are responsible are moved elsewhere. vCenter tracks objects by several internal IDs, the main one being the MoRef ID (Managed Object Reference). This tracking system assigns a unique ID to each VM, host, folder, etc., and it is very often this ID that such applications key off of when associating their inventories. For example, in the case of Veeam Backup & Replication, when swinging hosts and their VMs over to a new vCenter, each object will have a new MoRef generated for it. If you merely reconfigure the jobs to point to the new vCenter, Veeam will see new IDs and therefore think they are brand new VMs even though they’re actually the same. Veeam has addressed this challenge specifically in a KB, but you’ll want to understand what will happen in this case and how your monitoring or replication applications will behave. Points #6 through #10 here are the biggest and most complex things to investigate and can make or break whether a migration is right for you. Anything and everything that talks to or through vCenter Server must be accounted for, documented, and investigated.

 

Resources and Links

 

     I’ve covered lots of different material and provided several links, but I want to list the most important ones you can use as reference material when deciding on an upgrade path. Let these links be your guiding star and read them thoroughly and carefully. While several are for vSphere 6.5, they are generic documents that are updated with each major release.

 

Also included are release notes to the latest versions of vSphere as of the time of writing. Something that people rarely do is read release notes and instead plunge head first into an upgrade/migration. I can’t stress enough the importance of reading and then re-reading release notes. Bookmark them and check back frequently when planning your path because VMware always updates them as new issues are discovered and workarounds found.

 

VMware Compatibility Guide

VMware Product Interoperability Matrices

Update sequence for vSphere 6.5

vSphere 6.5 Upgrade Documentation

Best practices for upgrading to vCenter Server 6.5

vCenter 6.5 U1 Release Notes

ESXi 6.5 U1 Release Notes

vSphere 6.5 was released at the end of 2016 and so, at this point, has been on the market for about a year. VMware introduced several new features in vSphere 6.5, and several of them are very, very useful; however, sometimes people don’t take the time to really read and understand these new features to solve problems that might already exist. One such feature that I’d like to focus on today is the new HA feature called Orchestrated Restarts. In prior releases, vSphere High Availability (HA) has served to reliably restart VMs on available hosts should one host fail. It does this by building a manifest of protected VMs and, through a master-slave relationship structure, makes those manifests known to other cluster members. A fun fact I’ve used in interviews when assessing another’s VMware skill set: HA does not require vCenter for its operation, although it does for the setup. In other words, HA is able to restart VMs from a failed host even if vCenter is unavailable for any reason. The gap with HA, until vSphere 6.5 that is, is that it has no knowledge of the VMs it is restarting as far as their interdependencies are concerned. So, in some situations, HA may restart a VM that has a dependency upon another VM, which results in application unavailability when all return to service. In vSphere 6.5, VMware addressed this need with a new enhancement to HA called Orchestrated Restarts in which you can declare those dependencies and their order so HA restarts VMs in the necessary sequence. This feature is eminently useful in multi-tier applications, and one such application that can benefit tremendously is vRealize Automation. In this article, I’ll walk through this feature and illustrate how you can leverage it to increase availability of vRA in the face of disaster, in addition to covering a couple other best practices with vSphere features when dealing with similar stacks.

 

              In prior versions of HA, there was no dependency awareness—HA just restarted any and all VMs it knew about in any order. The focus was on making them power on and that’s it. There were (and still are) restart priorities which can be set, but not a chain. In vSphere 6.5, this changed with Orchestrated Restarts.

 

 

With special rules set in the web client, we can determine the order in which power-ons should occur. First, let’s look at a common vRA architecture. These are the machines present.

 

 

We’ve got a couple front-end servers (App), manager and web roles (IaaS), a vSphere Agent server (Agent), and a couple of DEM workers (DEM). The front-end servers have to be available before anything else is, followed by IaaS, and then the rest. So, effectively, we have a 3-tier structure.

 

 

And the dependencies flow in this order: App must be available before IaaS, and IaaS must be available before Agent or DEM.

 

Going back over to vCenter, we have to first create our groups or tiers. From the Hosts and Clusters view, click the cluster object, then Configure, and go down to VM/Host Groups.

 

 

We'll add a new group and put the App servers in it.

 

 

And do the same for the other tiers, with the third tier having three VMs.
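For those who'd rather script the group creation, here's a minimal PowerCLI sketch, assuming a PowerCLI release recent enough to include the DRS group cmdlets (6.5 R1 or later, if I recall correctly); the cluster and VM names are simply placeholders from my example.

# Create the three tier groups (cluster and VM names are illustrative)
$cluster = Get-Cluster -Name 'MyCluster'
New-DrsClusterGroup -Cluster $cluster -Name 'vRA-Tier1' -VM (Get-VM 'App01','App02')
New-DrsClusterGroup -Cluster $cluster -Name 'vRA-Tier2' -VM (Get-VM 'IaaS01','IaaS02')
New-DrsClusterGroup -Cluster $cluster -Name 'vRA-Tier3' -VM (Get-VM 'Agent01','DEM01','DEM02')

Once all three groups are in place, it should end up looking like the following.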

 

 

Now that you have those tiers, go down to VM/Host Rules beneath it. Here is where the new feature resides. In the past, there was just affinity, anti-affinity, and host pinning. In 6.5, there is now an additional option called "Virtual Machines to Virtual Machines."

 

 

This is the rule type we want to leverage, so we’ll create a new rule based on this and select the first two tiers.

 

 

This rule says anything in vRA-Tier1 must be restarted before anything in vRA-Tier2 in the case where a host failure takes out members from both groups. Now we repeat the process for tiers 2 and 3. Once complete, you should have at least two rules in place, possibly more if you’re following these instructions for another application.
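To my knowledge, PowerCLI doesn't yet expose this rule type through a native cmdlet, but the underlying API object is ClusterDependencyRuleInfo, so a rough sketch of automating the tier 1 to tier 2 rule (cluster and group names from my example) might look like this:

# Sketch: add a VM-to-VM dependency rule through the vSphere 6.5 API
$cluster = Get-Cluster -Name 'MyCluster'

$rule = New-Object VMware.Vim.ClusterDependencyRuleInfo
$rule.Name             = 'vRA-Tier1-before-Tier2'
$rule.Enabled          = $true
$rule.VmGroup          = 'vRA-Tier2'   # the dependent group
$rule.DependsOnVmGroup = 'vRA-Tier1'   # the group that must restart first

$ruleSpec = New-Object VMware.Vim.ClusterRuleSpec
$ruleSpec.Operation = 'add'
$ruleSpec.Info = $rule

$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.RulesSpec = @($ruleSpec)

# $true means modify the existing cluster config rather than replace it
$cluster.ExtensionData.ReconfigureComputeResource_Task($spec, $true)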

 

After they've been saved, you should see tasks kick off indicating these rules are being populated on the underlying ESXi hosts.

 

In my case, I'm running vSAN, and since vSAN and HA are very closely coupled, the HA rules serve as vSAN cluster updates as well. And, by the way, here is another opportunity to exercise a best practice with a distributed or enterprise vRealize Automation stack: we need to ensure machines of like tier are separated to increase availability. This is also done here, and we need to specify some anti-affinity rules to keep the App servers apart, as well as the IaaS servers and others.
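Those classic anti-affinity rules do have a native cmdlet, so if you'd rather script them too, a quick sketch with my example names (adjust the cluster and VM names to suit) could look like this:

# Keep like-tier VMs on different hosts (-KeepTogether $false = anti-affinity)
$cluster = Get-Cluster -Name 'MyCluster'
New-DrsRule -Cluster $cluster -Name 'vRA-App-Separate' -KeepTogether $false -VM (Get-VM 'App01','App02')
New-DrsRule -Cluster $cluster -Name 'vRA-IaaS-Separate' -KeepTogether $false -VM (Get-VM 'IaaS01','IaaS02')

My full complement of rules, both group dependency based and anti-affinity, looks like so.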

 

 

Now that we have the VM groups and the orchestration rules, let's configure a couple of other important points to make this stack function better. In vRA, the front-end (café) appliance(s) usually take some time to boot because of the number of services involved. This process, even with a well-performing infrastructure, can still take several minutes to complete, so we should complement these orchestrated restart rules with a delay that allows the front-end to start up properly before attempting to start the other tiers. After all, there's no point starting other tiers if they have to be restarted manually later because the first tier isn't yet ready for action.

 

Let's go down to VM Overrides and add a couple of rules. This is something else that's great about vSphere 6.5: the ability to fine-tune how HA restarts VMs based on conditions. Add a new rule and put both App servers in there.

 

 

There are three key things we want to change. First, the VM restart priority. By default, an HA cluster has a Medium restart priority where everything is of equal weight. We want to change the front-end appliances to be a bit higher than that because they serve as the login portal, so HA needs to make haste when prioritizing resources to start these VMs elsewhere. Next, the "start next priority VMs when" setting allows us to instruct HA when to begin starting VMs in the next rule. There are a few options here.

 

 

The default in the cluster, unless overridden, is "Resources allocated," which simply means as soon as the scheduler has allocated resources for the VM on a host, so basically immediately. "Powered On" waits for confirmation that the VM was actually powered on rather than just attempted. But the one that's extremely helpful here, and what I'd suggest setting, is "Guest Heartbeats detected." This setting allows ESXi to listen for heartbeats from VMware tools, which is usually a good indicator that the VM in question has reached a suitable run state for its applications to initialize.

 

Then, back in the list, an additional delay of 120 seconds will further allow the front-end to let services start before attempting to start any IaaS servers. If, after this custom value, guest heartbeats are still not detected, a timeout will occur and, afterwards, the other VMs will be started. This is extremely helpful in situations where you need all members to come up, even at the expense of some pieces maybe needing to be rebooted again. Rinse and repeat for your second tier containing your IaaS layer. Using the same settings as the front-end tier is just fine.
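If you have many VMs to override, the restart priority piece can at least be set in bulk with PowerCLI; as far as I'm aware, the readiness condition and the additional delay aren't exposed as simple cmdlet parameters, so set those in the VM Overrides UI as shown above. A minimal sketch with my example VM names:

# Raise the HA restart priority on the front-end appliances
Get-VM -Name 'App01','App02' | Set-VM -HARestartPriority High -Confirm:$false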

 

Great! Now the only thing left is to test. I'll kill a host in my lab to see what happens. Obviously, you may not want to do this in a production situation.

 

I killed the first host (10.10.40.246), which was running App01, IaaS01, and Agent01, at 7:55pm. Here's the state before.

 

 

Now, after the host has died and vCenter has acknowledged it, the membership looks like the following.

 

 

Those VMs show as disconnected with an unknown status. Let's see how HA behaves.

 

 

Ok, good, so at 8:00pm it started up App01 as it should have once vSAN updated its cluster status. Normally this failover is a bit quicker when not using vSAN.

 

Next, when guest heartbeats were detected, the 2-minute countdown started.

 

 

So at 8:04, it then started up IaaS01, followed similarly by Agent01. After a few minutes, the stack is back up and functional.

 

 

Pretty great enhancements in vSphere 6.5 related to availability if you ask me, and all these features are extremely handy when running vRA on top.

 

I hope this has been a useful illustration on a couple of the new features in vSphere 6.5 and how you can leverage those to provide even greater availability to vRealize Automation. Go forward and use these features anywhere you have application dependencies, and if you aren’t on vSphere 6.5 yet, start planning for it now!

For those who use vRealize Automation (vRA), you're probably all too familiar with vSphere templates and how they are the crux of your service catalog. Unless you're creating machine deployments anew from another build tool, you're probably using them, and it's likely you have at least two, sometimes many more. Using the Clone workflow, those vSphere templates become the new VMs in your deployments. That part is all well and good, but do you fall into the category of having several templates? How about multiple data centers across multiple geos? It becomes a real chore when patching time comes around. It's bad enough having to convert, power on, update, power down, and convert back two templates, let alone a dozen, only to be forced to multiply that work by the number of sites you have. So, in the words of the dozens of infomercial actors out there hawking questionably useful gadgets and gizmos…

I’m glad to share with you in this new article that there is indeed a better way if you happen to be using Veeam Backup & Replication. And the best part is this won’t even cost you so much as one easy payment of $19.95. Read on for the best thing since the Slumber Sleeve.

 

     Let's start off with a basic scenario: you have two different data centers, each managed by a different vCenter, with Veeam Backup & Replication able to reach both of them. There is at least one Veeam proxy per site. One site is considered the "primary" site while the other is "secondary." Your templates are updated only on the primary site, but you wish them to be available at the secondary site as well. vRA has endpoints set up for both vCenters, with reservations and blueprints created for both locations. This is a very common scenario, and it's exactly what can be achieved here, including replication to any number of additional sites.

 

As the scenario was described, make sure you have those prerequisites met. If you don't yet have a second endpoint in vRA with blueprints for the second location, that's not a problem. But do ensure Veeam is operational, has proxies at both sites, and that both vCenters have been added to Veeam's Managed Servers inventory list. Also, PowerCLI will be necessary on the Veeam management server. This process, as well as the scripts I'll provide, was developed and tested with PowerCLI 6.5.2 against vCenter 6.5 U1 and Veeam 9.5 U2, but presumably they should work on earlier versions of each as well.
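If you're not sure what's already present on the Veeam management server, a quick check in a PowerShell session on that server will confirm the PowerCLI version before you wire everything up:

# Confirm PowerCLI is installed and report its version
Get-PowerCLIVersion

# Or, on module-based installs, check for the core module
Get-Module -ListAvailable -Name VMware.VimAutomation.Core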

 

     To start this process off, we need to make sure there is some level of organization in vCenter on both sites. Place all your vRA templates into a dedicated folder within the source vCenter. That is to say, any template you want replicated needs to go in one specific folder. I just called mine “vRA Templates” and it is a subfolder of “Templates” because I have others that are not for vRA’s use.

Pretty simple with one Windows and one Linux. Also, keep in mind that these templates have the guest/software agent installed on them, and it can only be pointed at one vRA environment, so if you're thinking of replicating them to another data center for a different vRA's use, you might need another method. In the second site, create a folder of the same name.

 

Now, in that second site and within that folder, create new VMs but give them the same configuration as your templates. For example, my CentOS 7.2-vRA template you see there is a 1 x 2 x 6 configuration. Create the same configuration in a new VM in the second location. Feel free to give it another name, or append a portion to the name for uniqueness; however, this is not required. Join it to a portgroup of your choice, keeping in mind vRA will have the ability to change this when deployed. Here's the thing, though: you do *NOT* need to power it on or load any sort of OS. Just leave it there as a shell of a VM in a powered-off state.

Why is this necessary? Because although Veeam is perfectly capable of creating replicas at the destination, we can't have that in this case due to the instanceUUID value, also known as the vc.uuid. This is the ID by which vCenter tracks different VMs, and each one is derived from the vCenter unique ID. The instanceUUID is consequently how vRA tracks VMs, and so all must be unique, even across sites. If we let Veeam create the VM replica at the destination, those IDs would not be unique, and thus vRA would not see the two templates as different objects. By manually going through this creation process once, we can ensure those IDs are unique, and Veeam will honor them going forward with replication. With some simple PowerShell, we can verify those instanceUUIDs and keep track of them for later.

 

# Grab the VM and show the instanceUUID vCenter has recorded for it
$vm = Get-VM -Name "MyVMName"
$vm.ExtensionData.Config.InstanceUUID

 

With the template shells created at the destination, make sure you convert the templates at the source side to VMs. It’s time to set up the Veeam replication job. Again, before doing so, make sure both vCenters are in Veeam’s inventory and you have proxies available in both locations.

 

1.) Create a new replication job. Before navigating away from that first screen, there are two very important check boxes we must have in place.

Check “Low connection bandwidth” and “Separate virtual networks”.

 

2.) Select the VM folder at the source containing your vRA templates.

 

3.) At the Destination step, select the necessary resources including the folder at the destination containing those shell VMs.

 

4.) On the Network selection, select the portgroup on the source side where those templates are joined, and then the portgroup on the destination side where your shells are joined.

 

5.) On Job Settings, select the repository of your choice. Clear the selection for the replica suffix as we won’t need that (or assign something else). Change restore points to 1 since you must have at least one. In the Advanced Settings option, leave the defaults set ensuring the two options under the Traffic tab are enabled.

6.) On Data Transfer, select options to fit your infrastructure.

 

7.) Seeding. This is the key. We’re going to map our replicas to those shells we created earlier at the destination.

We won’t use the initial seeding functionality, only the replica mapping. Click each template and manually map it to those new shells. Don’t rely on the detect functionality as it probably won’t come up with the right systems.

 

8.) Guest Processing. We don't need it since these will be powered off.

 

9.) Schedule is up to you. Since Veeam won’t replicate anything if no data has changed on these templates, it’s safe to set it to a more frequent interval than that in which you’ll be updating them.

 

10.) Click Finish to save the job.

 

Now, if you were to run the job, it wouldn't process anything once you converted those VMs back to templates at the source. The reason is that Veeam can't replicate templates natively, so they must be converted. A couple of scripts I'll provide here handle the pre- and post-job processing. The pre-job script automatically converts that folder of templates into VMs. Once that's done and the Veeam job starts, it processes them normally. The post-job script reverses the process. In addition to templates needing to be VMs for replication to function, the added benefit is that CBT can be leveraged to find and move only the blocks of data that have changed. If you're familiar with backup jobs, you may know that they are capable of backing up templates, but only in NBD mode and without CBT. So this process of template-to-VM conversion and back actually saves time when that data needs to be moved over a WAN to remote sites.
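To give you an idea of the shape of the pre-job script before you grab the real ones below, a stripped-down sketch might look like the following. The parameters match the command line shown later, but the logic here is illustrative rather than a copy of the provided script.

# Illustrative pre-job logic only; download the actual scripts below
param(
    [string]$SourcevCenter,
    [string]$DestinationvCenter,
    [string]$FolderName
)

$username = 'svc_veeam@domain.com'   # edit these two variables before use
$password = 'YourPasswordHere'

# Connect to both vCenters at once
Connect-VIServer -Server $SourcevCenter, $DestinationvCenter -User $username -Password $password

# Convert every template in the named folder (on both sides) back to a VM
Get-Template -Location (Get-Folder -Name $FolderName) | Set-Template -ToVM -Confirm:$false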

 

     The last thing to do here is to download the pre- and post-job scripts I've provided, modify them slightly, and configure them in the advanced job settings options. Both scripts have been made with ease of use and portability in mind. The only things you need to set are the username and password variables.

 

Edit the PRE script to change the $username and $password variables, and do the same for the POST script. Since this is obviously sensitive information, you'll want to keep access to these scripts controlled somewhere local to your Veeam server. Once this is done, edit your replication job and go to Job Settings -> Advanced (button) -> Scripts. Check both boxes to run the scripts at every 1 backup session.

For the path, specify them as the following:

 

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File "D:\Replicate vRA Templates PRE.ps1" -SourcevCenter "source_vC.domain.com" -DestinationvCenter "dest_vC.domain.com" -FolderName "My vRA Templates"

 

Since the scripts accept parameters for source, destination, and folder, we can just pass these in as arguments in the program path.

 

So after getting that plumbed up, you should be ready to run your replication job. The overall process that happens will be something like this.

 

  1. Pre-job script runs. Converts folder of templates at source and destination to virtual machines.
  2. Veeam replication begins. Replicas are mapped. Disk hashing of destination VMs begins.
  3. Data is replicated from source to destination using available proxies.
  4. Replication complete. Retention applied by merging snapshots (if applicable).
  5. Post-job script runs. Checks destination VMs to see if vCenter reports VMware tools as installed. If no, starts VM, waits for tools status, then shuts down. If yes, converts VMs at destination to templates. Converts VMs at source to templates.

 

Step 5 may look a little strange. Why does it care about VMware tools status on a powered-off VM? This is because customizing a VM with a customization spec uses the VMware tools channel to push the configuration. If VMware tools are not installed, customization cannot happen, and even if tools really are installed but vCenter believes they are not, it will still fail. If customization fails, vRA fails, too, since vSphere machines require vCenter customization specs for things like static IPs and domain joins. So the workaround here is to power on the VM until tools start. At that point, vCenter will pick up on it and update what it has recorded in the VM's configuration to reflect that tools are installed. This is what allows customization to succeed. Once this status is updated on the VM, the script performs a guest OS shutdown (not a power off) followed by a conversion to template.
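If you're curious to see or reproduce that behavior manually, here's roughly what the check and remediation look like in PowerCLI (the VM name is just from my example):

# See what vCenter currently believes about tools on the shell VM
$vm = Get-VM -Name 'CentOS 7.2-vRA'
$vm.ExtensionData.Guest.ToolsStatus   # toolsNotInstalled will break customization

if ($vm.ExtensionData.Guest.ToolsStatus -eq 'toolsNotInstalled') {
    Start-VM -VM $vm | Wait-Tools               # power on and wait for tools to report
    Shutdown-VMGuest -VM $vm -Confirm:$false    # clean guest shutdown, not a power off
}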

 

One possible workaround, if you would prefer not to have this script power on your destination VM/template every time, is to power on the VM yourself after the initial replication completes, wait for VMware tools, shut it down, then merge the snapshot that Veeam just created. Doing so will retain the tools status, but the next run of the replication job will trigger another disk hashing pass. That hashing will have to check all the blocks on the destination VM to ensure the data is as it was left before proceeding with another replication cycle. But it is one possibility if, for some reason, you cannot have your templates powered on due to run-once scripts, configurations getting updated, etc.

 

     There you go. You now have the templates from your source site at your destination site, ready to consume with vRA. Because we performed replica mapping, they should be discrete instances of templates, even if the names are the same, due to their different instanceUUIDs. One last thing is to validate from vRA's side that these templates are, in fact, separate and can be consumed independently. With IaaS Administrator permissions, log in to vRA and go to Infrastructure -> Compute Resources -> Compute Resources. Hover over the compute resource corresponding to your remote site and perform a data collection.

 

 

Request an Inventory data collection.

 

 

Once successful, either create a new blueprint or edit an existing one. Click on your vSphere machine object on the canvas, go to the Build Information tab, and click the button next to the Clone from field. You should now be able to see templates from both sites in the list and eligible for selection on your blueprints.

 

 

Now all that’s left for you to do is consume them in vRA any way you wish!

 

What if you have multiple sites and want to do a one-to-many replication? That’s no problem either. Simply duplicate the process for the second vCenter in a new replication job, and for the scheduling portion, select the “After this job” option and pick the first replica job you created. Also be sure to edit the arguments of the pre- and post-job script configurations so it reflects the correct destination vCenter. Remember, you’ll still need to create one-time “shell” VMs at your other sites and store them in a consistent folder. You can chain as many of these together as you want and vRA will be able to see them independently.

 

     As you can see, replication of your vRA templates can be done fairly easily with the provided scripts and allows great benefits in the form of time savings and consistency. No more having to manually patch and update the same template in multiple sites. No more user error involved in forgetting to do something on one site that you did in another. And no more failing audits because your corporate, hardened template in your other site didn’t get the new security settings. Now you too can be like one of the millions of satisfied customers with your handy-dandy vRA Template Replicator!

 

 

I wish to give special thanks to Luc Dekens for his scripting expertise and assistance in this project. He was very generous in providing some code snippets and reviewing the process of the precursors to the scripts provided here, and he frequents the PowerCLI forum on the VMTN Community, helping others with their PowerShell scripting challenges. Thank you, Luc!

Although vRealize Operations Manager (vROps) is a great tool for a variety of reasons, one of the more useful capabilities it brings to a datacenter is the ability to look inside the guest OS and report on its state. Back in 6.2 or maybe 6.3, VMware added the ability to look at guest OS metrics. One of the things they added was guest file system stats on a per-drive basis. The available options as of vROps 6.6.1 look like this.

 

There is one metric that displays usage as a percentage, and two that show total capacity and usage in units of GB. What's absent from this list is a metric showing remaining disk space in terms of capacity (GB). Curiously, this metric was present back in 6.3 but has since been, for whatever odd reason, removed from later releases.

 

            Now, as far as alerting goes, there is already a preexisting alert in vROps that uses the Guest File System Usage (%) metric to inform you on a per-drive or file system basis when capacity is running low. This alert definition is called “One or more virtual machine guest file systems are running out of disk space” and is based on symptoms of a percentage starting at 85% and going to 95%. And this works perfectly well as you can see from the triggered alert below.

 

Looks good, right? What's the problem with that? It's not a problem per se, but a limitation on the flexibility of the alerting. Consider this scenario for a moment: you have a number of file server VMs with drives in the multi-terabyte range. Space isn't consumed on a terribly rapid basis, but you still want to know when you're running low. If you have a, say, 5 TB drive, using the default alert I showed above at the critical level of 95% capacity still leaves you with over 250 GB free. Probably not deathly critical if you're not consuming space in a rapid manner. What would be better in these cases is to craft an alert based on arbitrary remaining capacity figures rather than remaining percentage. Unfortunately, vROps no longer has the metric necessary to make this happen. Happily, though, we can get this ability back through the use of super metrics. Read on to learn how to restore it.

 

     The goal here is to use the super metrics ability in vROps to create a new metric that tells us, on a per-drive basis, the remaining capacity in GB. Using that new metric, we can create new symptoms, and using those symptoms we can create a new alert. Although this is a mildly involved process, you fortunately only need to endure it once and never again, regardless of how many different drives your systems might eventually have. In order to carry out these steps, we need a few prerequisites.

 

  1. A disposable Windows VM. Almost any Windows version will do. Best not to use an existing VM that you care about, although these steps should be perfectly safe if you can't create a new VM from a template.
  2. Latest VMware Tools installed on test VM.
  3. vRealize Operations Manager 6.6 or better.
  4. A Windows-based workstation with PowerCLI 6.5.2 installed. Earlier versions should work but have not been tested.

 

In this tutorial, I have a disposable VM called “space-test” which I’ll use to set this up.

 

 

I am providing a few resources in this blog to make the process easier, including a pre-constructed super metric that needs to be imported and two PowerShell scripts which can be used to add some test drives to your VM and format them inside Windows. Here's the overall process we'll follow.

 

  1. Deploy test Windows VM. Use the PowerCLI script to add drives, and the PowerShell script inside the guest to format them (both scripts provided).
  2. Import super metric provided here.
  3. Duplicate super metric enough times to cover all possible drive letters.
  4. Activate super metrics in your policy.
  5. Create a symptom definition using these super metrics.
  6. Create an alert using the new symptom definition.

 

  I've already deployed the test Windows VM, so now we need to add drives. The idea is to add a drive for every available letter in the alphabet so that all possibilities you might encounter are covered. Since A, B, and C are reserved, and you'll likely have D as a virtual optical disk drive, we just need the rest of the letters. A simple PowerCLI script I'm providing does all this in just a couple of lines.

 

# Connect to your vCenter (edit the server name and credentials)
Connect-VIServer vcenter.domain.com -User myuser@domain.com -Password 'VMware1!'

$vm = Get-VM -Name myvmname

# Add 22 thin-provisioned 1 GB disks, one for each drive letter E through Z
ForEach ($HardDisk in (1..22))
{
    New-HardDisk -VM $vm -CapacityGB 1 -StorageFormat Thin | New-ScsiController
}

 

 

Be sure to edit the script accordingly to connect to your vCenter and provide your VM name. Also, ignore any errors warning that the VM must be in a powered-off state, as they're false. If this runs successfully, you should have a bunch of 1 GB drives created on your test VM. After that, run the provided PowerShell script inside the guest to bring them online and format them.

 

 

# Rescan storage so Windows sees the newly added disks
Update-HostStorageCache

# Find every disk that hasn't been initialized yet
$disknum = (Get-Disk | Where-Object PartitionStyle -eq 'raw').Number

# Initialize, partition, and format each one as NTFS
Foreach ($num in $disknum) {
    Initialize-Disk -Number $num -PartitionStyle MBR -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -Confirm:$false
}

 

 

Once complete, you should see this madness.

 

 

Go into vROps under Administration -> Configuration -> Super Metrics and import the JSON file provided for you.

 

The result should be this.

 

 

Next, duplicate that super metric for as many drive letters as you have (not counting D: if that's your CD-ROM drive), changing the metric slightly each time so it references the correct drive letter. (Unfortunately, I don't have a script that does this part for you.)

 

After creating the metrics, add the Virtual Machine as the object type.

 

 

Go into your policy and activate these new super metrics. Save the policy.

After a couple of collection cycles, you should see these new super metrics working and displaying data on your test VM. You can create a dummy file that occupies space on a drive to see if the super metric is reporting correctly. Something like fsutil file createnew E:\200mb 209715200 will get the job done.

 

The high value seen above is before the 200 MB file was created, and the low value is after, proving the super metric is collecting accurately.

 

     Next, we have to create a new symptom definition with these super metrics. Navigate to Alerts -> Alert Settings -> Symptom Definitions. In the metric explorer, change the object type to Virtual Machine. Click the strange button to the right of the "Metrics" drop-down menu and search for, then select, your test VM. The reason for this (and the whole reason a test VM was necessary in the first place) is a rather unfortunate limitation in vROps to date: you can't select metrics from which to create a symptom definition just by virtue of them existing in the system. They have to be "active" on an object. If you were to expand the super metrics list prior to selecting your test VM, you wouldn't find them. Now that you've selected it, however, they appear. Drag-and-drop each of your super metrics into the symptom definition window. Change the definition to suit your needs.

 

 

If you expand the Advanced section, you'll see that by default the wait and cancel cycles are set to 3. When complete, save your symptom definition. Once saved, you'll notice individual symptom definitions have been created for each super metric. This is intended and rather nice, since we don't have to create them individually ourselves.

 

We're almost done. At this point, we have to create a new alert definition that will make use of these symptoms. Walk through the new alert creation wizard, giving it a name and description, selecting the base object type of Virtual Machine, setting an alert impact, and then adding symptom definitions. Drag each of them onto the definition so they're stacked together rather than separate. Change the evaluator to be "any" as shown in my example.

 

This will ensure that you only need a single alert definition to catch any of those symptoms that become true. Once saved, it's time to test it out. Since I created my symptom definition with the condition that those super metrics must show less than 5 GB of space remaining, I have to increase the size of a drive. After that, fill it up with some test stuff.

 

Wait a few collection cycles and check to see if the alert fires.

 

 

And there we go, we have an active alert on our test system. Since it's a regular alert, we can do anything we want with it, including sending it to a CMDB, dispatching an email, firing a webhook, whatever.

 

     Although this process is, shall we say, uncouth (due to VMware's redaction efforts), we only have to endure the pain once, and all future Windows systems will be covered. But there you have it: we have a good use case for super metrics and have plumbed them all the way through from simple metric to functional alert. Now it'll be easier for you to integrate this into your process and hopefully keep a closer eye on those drives in the future.

 

P.S. - The two scripts and JSON for the super metric definition are all available on my GitHub repo here.