vCAC Provisioning, vCO, and Configuration Management

I would like feedback on our current approach to the following scenario. In vCAC, we must support provisioning three versions of Windows Server and two versions of RHEL (these VMs are provisioned by cloning OS/version-specific templates). These VMs must have a number of software agents installed on them for antivirus, backups, monitoring, asset management, etc., some of which require subsequent configuration via their associated management servers. Our current automated provisioning approach splits all required tasks into three phases:

  1. Pre-provisioning - In the vCAC building machine stub, we have calls to vCO workflows to: pre-create each VM in AD (so that it'll be in a specific container when it joins the domain), and to obtain its static IP and create DNS records via Infoblox.
    • The approach for this phase seems reasonable to us. Any dissenting or alternative comments?
  2. Provisioning - In each template, the guest customization spec is used to join the computer to the domain, run an embedded script that installs multiple software agents (the installers are also embedded in the templates), and perform OS updates/patching.
    • We are considering the use of a Configuration Management tool, such as Puppet, to manage these servers after they are provisioned. If we adopt Puppet (or a similar tool), would it be advantageous to change our provisioning process to leverage Puppet too (assuming we have the time/ability to do so)? For example, we could remove all of the agent installers and OS patching from this phase and just install the Puppet agent. Then, we could let Puppet do agent installs/configuration and OS patching during the post-provisioning phase. This has appeal from a consistency perspective: if we intend to manage servers with Puppet, why not leverage it as early as possible in the VM lifecycle?
    • Alternatively, we could have vCO workflows do agent installs/configuration and OS patching during the post-provisioning phase.
    • One of my concerns in this phase is whether we are cramming too much into the templates and their customization specs, given the possible alternatives. Thoughts?
  3. Post-provisioning - In the vCAC machine provisioned stub, we have calls to vCO workflows to: call various management servers to configure agents that require it, call an internal system to do a vulnerability scan of the VM, create a record in our CMDB, and to send email notifications to internal support groups.
    • The approach for this phase also seems reasonable to us. Any dissenting or alternative comments?
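The three phases above amount to a single orchestration in which each step is a call out to some external system. A minimal sketch of that flow, with every helper a hypothetical stand-in for a vCO workflow call (none of these function names are actual vCAC/vCO APIs; shown in Python, though real vCO scriptable tasks would be JavaScript):

```python
# Sketch of the three-phase provisioning flow described above.
# Every helper is an illustrative stand-in for a vCO workflow call.

def pre_provision(vm_name):
    """Phase 1: runs from the vCAC building-machine stub."""
    return [f"AD: pre-create {vm_name} in target container",
            f"Infoblox: reserve static IP and create DNS records for {vm_name}"]

def provision(vm_name):
    """Phase 2: clone the template; guest customization does the rest."""
    return [f"clone OS/version-specific template for {vm_name}",
            "guest customization: domain join, agent installs, OS patching"]

def post_provision(vm_name):
    """Phase 3: runs from the vCAC machine-provisioned stub."""
    return [f"configure agents via management servers for {vm_name}",
            f"vulnerability scan of {vm_name}",
            f"create CMDB record for {vm_name}",
            f"email notifications to support groups about {vm_name}"]

def provision_vm(vm_name):
    """Run all three phases in order and return the combined step log."""
    log = []
    for phase in (pre_provision, provision, post_provision):
        log.extend(phase(vm_name))
    return log
```

The Puppet question in phase 2 then becomes a question of which steps live inside `provision` (baked into the template) versus `post_provision` (handed to a configuration management agent).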

All comments from the community are welcome. Thanks.

2 Replies
Hot Shot

I think you're right on track. The important part of the provisioning step is knowing when to use vCAC/vCO and when to use something else. Puppet would be a great choice, as it has a lot of support in vCO and Application Services. That said, check out the other configuration management tools out there to make sure Puppet is actually the right tool for the job. I actually use GPOs to do a lot of my agent installs for Windows. Also look at using Application Services blueprints for agent installs and so forth, so you can standardize those just as you do with vCAC blueprints. While vCO can do a lot even within an OS, it is not really the right tool for that job. Use vCO as the director of all the different tools, as you are already doing: call out to AD, the CMDB, configuration management, storage or network resources, etc. Your concern about putting too much into templates and customization specs is very valid, and your intuition to move toward configuration management tools is spot on. Best of luck.


We are pretty much doing the exact same thing. However, we are not looking at using Puppet for OS deployment; we are going to rely on Guest Customization for a little while (though Linux has more post-provisioning scripts for network setup and the like). It is a good idea to consider Puppet, however.

One thing we battle with is how far along the pipeline we consider a Day 1 item successfully provisioned, recognizing that a failure of something like a backup agent install could invalidate and revert/destroy the provisioned server. With some agents/configs, it may be survivable to consider the machine provisioned anyway and then enact other policy-based actions in vCO (vRO) if a failure occurred. This is something we are looking into now.
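One way to make that survivable-vs-fatal distinction explicit is a small failure policy that the provisioning workflow consults before deciding whether to destroy the machine. A minimal sketch, where the step names and the policy table are purely illustrative assumptions (again in Python, though in vCO/vRO this would be a JavaScript scriptable task):

```python
# Illustrative Day-1 failure policy: a fatal failure reverts/destroys the VM,
# a survivable one keeps it provisioned and queues a policy-based remediation.
FATAL_STEPS = {"domain_join", "ip_assignment"}          # hypothetical step names
SURVIVABLE_STEPS = {"backup_agent", "monitoring_agent", "av_agent"}

def handle_failure(step, failures):
    """Record a failed step and return the resulting disposition."""
    failures.append(step)
    if step in FATAL_STEPS:
        return "destroy"          # invalidate and revert the provisioned server
    if step in SURVIVABLE_STEPS:
        return "remediate-later"  # keep the VM; enact a later vRO action
    return "manual-review"        # unclassified step: flag for a human
```

The useful part is not the code but the forcing function: every provisioning step has to be classified up front instead of the whole pipeline implicitly being all-or-nothing.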

Even though we have a CMDB, we decided to *also* write all the vCAC metadata into a table during provisioning (we created a Postgres DB and table for the info). It is a makeshift CMDB that contains current data about every machine and historical data about past machines, based on an Active/Inactive flag: Machine Provisioned writes the data record with the flag set to Active, and Machine Disposing sets the flag to Inactive. We then have a daily workflow that validates the current data for every managed vCAC:VirtualMachine in the environment, and a one-year archive process on the table. I think it is important to keep an offline data record for our own nefarious purposes. That might be an idea you would want to consider, and it is easy to implement (the vCO Team has a blog on it out there, I believe).
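That Active/Inactive pattern is easy to prototype. A minimal sketch using SQLite as a stand-in for the Postgres table (the schema and column names are illustrative assumptions, not the actual table):

```python
import sqlite3

# In-memory stand-in for the Postgres machine-metadata table.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE vm_metadata (
    vm_name    TEXT PRIMARY KEY,
    blueprint  TEXT,
    owner      TEXT,
    active     INTEGER NOT NULL DEFAULT 1,   -- 1 = Active, 0 = Inactive
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
)""")

def machine_provisioned(vm_name, blueprint, owner):
    """Machine Provisioned stub: write the record with the Active flag set."""
    db.execute(
        "INSERT INTO vm_metadata (vm_name, blueprint, owner) VALUES (?, ?, ?)",
        (vm_name, blueprint, owner))

def machine_disposing(vm_name):
    """Machine Disposing stub: flip the flag to Inactive, keeping history."""
    db.execute("UPDATE vm_metadata SET active = 0 WHERE vm_name = ?", (vm_name,))

machine_provisioned("web01", "win2012-std", "jdoe")
machine_disposing("web01")
row = db.execute(
    "SELECT active FROM vm_metadata WHERE vm_name = 'web01'").fetchone()
```

The row survives disposal with `active = 0`, which is exactly what makes the table useful as a historical record rather than just a mirror of the live inventory.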

As far as your emails go, we have found it best to create our own email processes in vCO instead of relying on the vCAC notifications. We created a single workflow (which could be an action) that accepts email parameters and data and formats them in the same manner every time.
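A single reusable formatter like that can be very small. A hedged sketch of the idea (in vCO/vRO the equivalent would be a JavaScript scriptable task feeding the mail plug-in; the subject prefix and field names here are illustrative):

```python
from email.message import EmailMessage

def build_notification(subject, recipients, fields):
    """Format every provisioning email the same way: a fixed subject prefix
    and an aligned key/value body, regardless of which workflow calls it."""
    msg = EmailMessage()
    msg["Subject"] = f"[vCAC] {subject}"
    msg["To"] = ", ".join(recipients)
    width = max(len(k) for k in fields)          # align the value column
    body = "\n".join(f"{k.ljust(width)} : {v}" for k, v in fields.items())
    msg.set_content(body)
    return msg

msg = build_notification("Machine Provisioned", ["ops@example.com"],
                         {"VM": "web01", "Owner": "jdoe",
                          "Blueprint": "win2012-std"})
```

Because every calling workflow passes only a subject and a dict of fields, the layout stays identical no matter which stub triggered the notification.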

All I can think of for now... from my perspective, you guys are making good decisions.