Provisioning Systems - making sense of it all and tying it together

Hi, we're a medium-sized, fairly autonomous dept (20 ESX hosts, 300 Linux/Windows VMs).

We get to decide our own VM and physical-system provisioning procedures (sometimes too much freedom and too many options can be a confounding thing! :)

Currently for VMs we have golden-master templates which we clone; the specs for what went into each template are documented in a wiki.

Recently we started building Linux VMs with Cobbler and kickstart, and the VMs turn out pretty quickly (in about 7 minutes). The speed of cloning a VM template had always been held up as a benefit over kickstart, but that no longer seems true.
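For anyone who hasn't seen one, the spec Cobbler hands to the installer is just a small text file. This is a minimal sketch of the sort of kickstart Cobbler templates for us - every value below is a placeholder, not our actual spec:

```
# Minimal RHEL/CentOS-style kickstart (placeholder values only)
install
url --url=http://mirror.example.com/centos/6/os/x86_64
lang en_US.UTF-8
keyboard us
timezone --utc America/Chicago
rootpw --iscrypted $6$...          # hashed password, never plaintext
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot

%packages
@core
ntp
%end
```

Being plain text is exactly what makes these specs easy to audit and version, unlike a binary golden image.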

I'd like to see us tie together the most common provisioning options (Linux/Windows x VM/physical) in a tightly defined process, with auditability and revision control for the specs (e.g., checked into SVN).
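Since the kickstart and Cobbler specs are just text files, the revision-control piece can be as simple as a normal SVN workflow. The repo URL, paths, and commit message here are made up purely for illustration:

```
svn checkout https://svn.example.com/repos/provisioning
cd provisioning
cp /var/lib/cobbler/kickstarts/web-vm.ks kickstarts/
svn add kickstarts/web-vm.ks
svn commit -m "web VM kickstart: add ntp, document partitioning"
```

That gives you a diffable, auditable history of exactly what went into every build - something a cloned template can't easily provide.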

What have other folks found to be a good system for tying provisioning together, and what rationale would you use to determine when cloning is more appropriate than Cobbler/kickstarting a new system? (We mostly provision web, app, and occasionally DB Linux VMs for customers.)

Is Orchestrator capable of this? (I like the idea of a provisioning portal fronting our private cloud(s) one day soon.)

Cobbler (https://fedorahosted.org/cobbler/) works well for Linux - is there a similar system for Windows?

Part of the reason our dept IT exists is to provide domain-knowledge value, customizing solutions beyond a basic PaaS (Platform as a Service) provisioning portal - but I see tremendous value in establishing the PaaS framework so we can focus on our value-add!

thanks for any insight


VCP5 VSP5 VTSP5 vExpert http://vmadmin.info


I've recently started using vCO. It's quite capable and extensible, and has many built-in vCenter tasks. It also has a nice plugin framework for developing custom solutions. One of the things I really like about it is that it automatically generates the parameter form for a workflow, so you don't need to spend time on UI creation unless you want something fancy. The actions and decisions that make up a workflow use JavaScript, so you can change their behaviour to suit your needs.
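To make that concrete: a scriptable task inside a vCO workflow is just a JavaScript body over the plugin objects. This is only a sketch from memory and runs inside vCO, not standalone - the input names and the clone call's parameters are illustrative, not copied from a real workflow:

```
// 'vmTemplate' (a VC:VirtualMachine), 'vmName', 'folder', and 'spec'
// would be defined as workflow inputs/attributes; vCO auto-generates
// the request form for them.
System.log("Cloning " + vmTemplate.name + " to " + vmName);
var task = vmTemplate.cloneVM_Task(folder, vmName, spec);
System.log("Submitted clone task for " + vmName);
```

The point is that the workflow logic stays in small, readable scripts while vCO handles the form, scheduling, and vCenter session for you.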

Workflows are exposed as web services, so if you need a higher-level orchestration engine, you could go with something like Apache ServiceMix, assemble your business processes there, and call vCO via the SOAP adapter.

A few factors to consider when evaluating whether to build or to image:

  1. Time to build vs time to deploy image

  2. Amount of deploy time customization

  3. Rate of change of the contributing components

Depending on how much is going on during the build (compiles, package installation, etc.), it can be significantly faster to deploy an image than to build the same result. On the other hand, post-image customization is sometimes difficult to automate, and if you have to do it anyway, you may as well get the flexibility of building the components on top of a more broadly applicable base image. Finally, and I think most importantly, it comes down to how fast the things in your image are changing. Things that change really fast don't fit well into an image; you'll spend all of your time respinning images just to keep up with a few fast-changing components. You'd be better off pulling those out and putting only long-cycle components in the image. That also positions you to apply the fast-cycle components to already-deployed systems.
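A toy way to state that rule of thumb (the function and all the numbers here are mine, purely illustrative - not from any product): compare how often a component changes with how often you're realistically willing to respin the image.

```javascript
// Illustrative heuristic: a component whose change interval is shorter
// than your image respin cadence will always be stale in the image,
// so deliver it via a package system instead.
function placement(changeIntervalDays, respinIntervalDays) {
  return changeIntervalDays >= respinIntervalDays ? "image" : "package";
}

// Assumed monthly respin cadence and made-up change rates:
var respinEvery = 30;
var components = { "base OS": 180, "JVM": 90, "app code": 7 };

for (var name in components) {
  console.log(name + " -> " + placement(components[name], respinEvery));
}
```

With those made-up numbers, the base OS and JVM land in the image while the app code is deployed as packages - which matches the split I describe below.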

So, in short, I think there's a place for both deployment strategies on most systems: an image for the base OS plus some slow-changing components that are fairly ubiquitous in your environment, and a package-deployment system that can keep up with the rest.

Hope it helps.

