I would like feedback on our current approach to deal with the following scenario. In vCAC, we must support provisioning 3 versions of Windows Server and 2 versions of RHEL (these VMs are provisioned by cloning OS/version-specific templates). These VMs must have a number of software agents installed on them for antivirus, backups, monitoring, asset management, etc., some of which require subsequent configuration via their associated management servers. In our current automated provisioning approach we split all required tasks into 3 phases like so:
All comments from the community are welcome. Thanks.
I think you're right on track. The important part of the provisioning step is knowing when to use vCAC/vCO and when to use something else. Puppet would be a great choice, as it has a lot of support with vCO and Application Services. That being said, check out the other configuration management tools out there to make sure Puppet is actually going to be the right tool for the job. I actually use GPOs to do a lot of my agent installs on Windows. Also look at using Application Services to build the blueprints for agent installs and so forth, so you can standardize those just like you do with vCAC blueprints. While vCO can do a lot even within an OS, it is not really the right tool for that job. Use vCO as the director of all the different tools, like you are already doing: call out to AD, the CMDB, configuration management, storage or network resources, etc. Your concern about putting too much into templates and custom specs is very valid, and your intuition to move to config management tools is spot on. Best of luck.
We are pretty much doing the exact same thing. However, we are not looking at using Puppet for OS deployment; we are going to rely on Guest Customization for a little while (though Linux needs more post-provisioning scripts for network setup and the like). It is a good idea to consider Puppet, however.
One thing we battle with is how far along the pipeline an item must get before we consider its Day 1 provisioning successful, recognizing that a failure of something like a backup agent install could invalidate the provisioned server and force a revert/destroy. With some agents/configs, it may be survivable to consider the machine provisioned anyway and then enact other "policy based" actions in vCO (vRO) when a failure occurred. This is something we are looking into now.
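That "survivable vs. fatal" decision can be sketched as a simple policy lookup. This is a hypothetical illustration, not anyone's actual workflow: the agent names, the ratings, and the `handle_install_failure` helper are all assumptions for the sake of the example.

```python
# Hypothetical policy: decide whether a failed agent install should
# revert/destroy the machine or leave it provisioned with a follow-up
# remediation action. Agent names and ratings are illustrative only.
AGENT_POLICY = {
    "antivirus": "fatal",        # failure invalidates the build
    "backup": "fatal",
    "monitoring": "survivable",  # failure tolerated; flag for remediation
    "asset_mgmt": "survivable",
}

def handle_install_failure(agent: str) -> str:
    """Return the follow-up action for a failed agent install."""
    # Unknown agents default to the safest behavior: revert.
    policy = AGENT_POLICY.get(agent, "fatal")
    if policy == "fatal":
        return "revert"      # destroy/revert the provisioned VM
    return "remediate"       # keep the VM; queue a policy-based vCO action

print(handle_install_failure("backup"))      # revert
print(handle_install_failure("monitoring"))  # remediate
```

The useful property is that the policy table, not the provisioning workflow, owns the decision, so it can be tuned per agent without touching the pipeline logic.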
Even though we have a CMDB, we decided to *also* write all the vCAC metadata into a table during provisioning (we created a Postgres DB and table for the info). It is a makeshift CMDB that contains the current data for every machine and historical data for past machines, based on an Active/Inactive flag: Machine Provisioned writes the data record with the Active flag; Machine Disposing sets the flag to Inactive. We then have a daily workflow that validates the current data for every managed VCAC:VirtualMachine in the environment, plus a one-year archive process on the table. I think it is important to keep an offline data record for our own nefarious purposes. That might be an idea you would want to consider, and it is easy to implement (the vCO Team has a blog out there, I believe).
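For anyone picturing that flow, here is a minimal sketch of the Active/Inactive pattern. It uses in-memory SQLite in place of the Postgres DB, and the column names are assumptions, not the poster's actual schema:

```python
import sqlite3

# Sketch of the makeshift-CMDB table; SQLite stands in for Postgres,
# and the schema below is an illustrative guess.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE vm_metadata (
        vm_name      TEXT PRIMARY KEY,
        os_template  TEXT,
        provisioned  TEXT,     -- stamped by the Machine Provisioned stub
        disposed     TEXT,     -- stamped by the Machine Disposing stub
        active       INTEGER   -- 1 = Active, 0 = Inactive
    )
""")

# Machine Provisioned: insert the record with the Active flag set.
conn.execute(
    "INSERT INTO vm_metadata VALUES (?, ?, ?, NULL, 1)",
    ("vm-web-01", "rhel7-template", "2014-10-01T12:00:00"),
)

# Machine Disposing: flip the flag to Inactive and stamp the disposal time.
conn.execute(
    "UPDATE vm_metadata SET active = 0, disposed = ? WHERE vm_name = ?",
    ("2014-11-01T09:00:00", "vm-web-01"),
)

# The daily validation and one-year archive jobs would simply query on
# the active flag and the timestamps:
rows = conn.execute("SELECT vm_name, active FROM vm_metadata").fetchall()
print(rows)  # [('vm-web-01', 0)]
```

Because records are flagged rather than deleted, the table keeps a full history for the archive window, which is what makes the offline record useful.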
As far as your emails go, we have found it best to create our own email processes in vCO instead of relying on the vCAC notifications. We created a single workflow (which could be an action) that accepts email parameters and data and formats the message the same way every time.
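The shape of that single formatting routine might look like the following. This is a hedged sketch, not the poster's workflow: the `format_notification` name and its fields are made up for illustration, and a real vCO implementation would be a scriptable task feeding a send-mail step.

```python
# Hypothetical single-point email formatter: every notification goes
# through one function, so the layout is identical every time.
def format_notification(subject: str, machine: str, event: str,
                        details: dict) -> str:
    body_lines = [f"Machine: {machine}", f"Event: {event}", ""]
    # Sort the detail fields so the ordering is deterministic.
    body_lines += [f"{k}: {v}" for k, v in sorted(details.items())]
    return f"Subject: [vCAC] {subject}\n\n" + "\n".join(body_lines)

msg = format_notification(
    "Provisioning complete", "vm-web-01", "MachineProvisioned",
    {"template": "rhel7", "owner": "jdoe"},
)
print(msg)
```

Centralizing the formatting is the point: callers pass data, never markup, so a layout change is a one-place edit.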
All I can think of for now... from my perspective, you guys are making good decisions.