

Have you ever needed more control over which custom properties get assigned to specific component machines of a multi-machine blueprint, or wanted to use the same component blueprint for all component machines of a multi-machine blueprint?  The Ultimate Multi-Machine Blueprint Extension aims to help with that.

The Ultimate Multi-Machine Blueprint Extension allows you to utilize the same source component blueprint for multiple component machines while at the same time controlling which custom properties get assigned to each of the components.  This allows you to customize each of them differently during deployment.

This extension works well with the Custom Hostname and Custom vCenter Folders extensions to round out the use of Multi-Machine Blueprints.

Example Use Cases:

  1. Use a single machine blueprint for all components of a multi-tiered multi-machine blueprint and customize the name of each component.
  2. Use a single machine blueprint for all components of a multi-tiered multi-machine blueprint and customize the guest agent actions of each component machine.
  3. Use a single machine blueprint for all components of a multi-tiered multi-machine blueprint and override the template for each component to deploy from a different source vCenter template (see the sketch after this list).
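
To make the use cases above a little more concrete, here is a minimal sketch of the kind of per-component property assignment this enables. Hostname, CloneFrom, and VMware.VirtualCenter.Folder are standard vCAC custom properties, but the component names, property values, and the mapping layout shown here are hypothetical illustrations, not the extension's actual configuration format.

```powershell
# Hypothetical illustration only: one source component blueprint reused for three
# component machines, each receiving different standard vCAC custom properties.
# The mapping layout below is NOT the extension's actual configuration format.
$componentProperties = @{
    'Web' = @{ 'Hostname' = 'web01'; 'CloneFrom' = 'Template-Web2012'; 'VMware.VirtualCenter.Folder' = 'MyApp/Web' }
    'App' = @{ 'Hostname' = 'app01'; 'CloneFrom' = 'Template-App2012'; 'VMware.VirtualCenter.Folder' = 'MyApp/App' }
    'DB'  = @{ 'Hostname' = 'db01';  'CloneFrom' = 'Template-SQL2012'; 'VMware.VirtualCenter.Folder' = 'MyApp/DB' }
}

# Print what each component machine of the multi-machine service would receive.
foreach ($component in $componentProperties.Keys) {
    '{0}:' -f $component
    foreach ($entry in $componentProperties[$component].GetEnumerator()) {
        '  {0} = {1}' -f $entry.Key, $entry.Value
    }
}
```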

The goal of this extension is to limit blueprint sprawl by leveraging the multi-machine construct to customize the component machines, relying less on customizing the single machine blueprints and making them more reusable.

This extension was designed and built as a collective effort by Tom Bonanno and Sid Smith.  If you have any feedback, please let us know.

 

[More]

CloudUtil is a vRA (vCAC) repository management tool that is part of the vRA Designer.  It is actually what you are launching when you run the Designer; when run without parameters, it launches the GUI Designer.  However, it has other functions that can prove useful from time to time.

For starters, if you don't have the Designer installed, you can get it by going to https://FQDNofvCACAppliance:5480 -> IaaS Install -> vCloud Automation Center Designer.  When you install it, make sure you put in the IaaS host, NOT the vCAC appliance hostname.

I frequently get asked how workflow revisions can be removed from the Designer.  The answer is that they can, but you need a Development Kit license to do so with CloudUtil.  Working in the Designer, you will find that revisions add up fast, and before you know it you could have hundreds.  I'm going to walk you through a way to remove the revisions without a Development Kit license for CloudUtil.

 

[Read More]

vCAC by default will place all provisioned machines into a vCenter folder named VRM.  You can override this using the custom property VMware.VirtualCenter.Folder to tell vCAC where to place the provisioned machine.  While it's great that you can tell vCAC where to place the provisioned machine, it isn't very flexible.  I built the Custom vCenter Folder Extension to fix that and make folder placement as flexible as you need it to be.  VM folder placement isn't just about organizing virtual machines.  It provides a way to control access to these machines through vCenter as well.  Many organizations control permissions to these environments using these folders and need to be able to place any machine wherever they need for these purposes.

Multi-Machine blueprints are another area where this extension adds value.  You can control placement of virtual machines by defining the VMware.VirtualCenter.Folder property on a Multi-Machine blueprint, but all VMs for all Multi-Machine apps are then placed in the same folder, creating confusion as to which VMs belong to which Multi-Machine application.  Add NSX into the mix and you have Multi-Machine components spread all over the place with no easy way to determine which VMs and NSX Edges belong to which application.

When used with Multi-Machine blueprints, the Custom vCenter Folder Extension can place all component virtual machines, as well as deployed NSX Edge appliances, in a folder named after the Multi-Machine application if you desire, making it easy to identify the related components of an application.  This also allows you to easily grant vCenter permissions on the components of the application if necessary.
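
As a rough illustration of what the extension automates, the PowerCLI sketch below creates a per-application folder and moves the application's component VMs into it. The vCenter name, datacenter name, application name, and VM naming pattern are assumptions for the example; the extension itself drives this through the VMware.VirtualCenter.Folder custom property and workflow stubs rather than PowerCLI.

```powershell
# Minimal PowerCLI sketch (not the extension itself): group an application's
# component VMs into a folder named after the multi-machine application.
# 'vcenter.example.com', 'MoaC-DC', 'MyApp42', and the VM name filter are assumptions.
Connect-VIServer -Server 'vcenter.example.com'

$appName    = 'MyApp42'
$datacenter = Get-Datacenter -Name 'MoaC-DC'
$vmRoot     = Get-Folder -Name 'vm' -Type VM -Location $datacenter   # root VM folder
$appFolder  = Get-Folder -Name $appName -Location $datacenter -ErrorAction SilentlyContinue

# Create the application folder if it does not already exist.
if (-not $appFolder) {
    $appFolder = New-Folder -Name $appName -Location $vmRoot
}

# Move every component VM whose name starts with the application name.
# (-InventoryLocation requires a PowerCLI release that supports folder moves this way.)
Get-VM -Name "$appName*" | Move-VM -InventoryLocation $appFolder
```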

Features

  • Dynamic folder names based on a custom naming scheme
  • Multi-Machine folder placement, including NSX Edge appliances
  • Automatic Multi-Machine folder removal when the Multi-Machine app is destroyed

 

[Read More]

What is the CloudClient, you ask?  The CloudClient is a verb-based command-line utility aimed at simplifying interactions with multiple product APIs.  The CloudClient also provides common security, exception handling, JSON, and CSV support.  Currently it supports vRealize Automation (vRA, formerly vCAC), Site Recovery Manager (SRM), and vRealize Orchestrator (vRO, formerly vCO).

Getting Started with vRealize Cloud Client 3.0

  1. First you need to get the Cloud Client which you can download here.

[Read More]

A common extension requested for vCloud Automation Center is the ability to pre-create computer account objects in Active Directory in a specific Organizational Unit, and also to decommission the accounts in different ways along with the virtual machine. Without a custom workflow, you can have the computer join the domain during the customization phase, but this will only create the computer account in the default Computers container. Also, while there is an out-of-the-box AD machine cleanup plugin which can be enabled, it will likely never support the multi-tenancy introduced in vCAC 6.0. vCO does not support it today either, but it is more likely to gain support in the near future.

This solution implements these functions using vCenter Orchestrator and its plugins for vCAC and Active Directory.

The rest of this article contains instructions on installing and configuring the vCAC AD Computer Account Management Extension. This extension allows administrators to model very specific OU structures for their AD machine accounts using vCAC custom properties, and supports dynamic OU Distinguished Name building based on combinations of properties derived from different areas of vCAC (compute resources, reservations, groups, blueprints, etc.).
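
For a sense of the underlying operation, here is a minimal PowerShell sketch of pre-creating a computer account in a specific OU. The extension performs the equivalent through the vCO Active Directory plugin; the property values and OU structure shown here are assumptions for illustration only.

```powershell
# Minimal sketch (assumptions throughout): build an OU distinguished name from
# values that might arrive as vCAC custom properties, then pre-create the
# computer account there. The extension itself does this via the vCO AD plugin.
Import-Module ActiveDirectory

# Hypothetical values that could be sourced from vCAC custom properties.
$machineName  = 'web01'
$businessUnit = 'Finance'
$environment  = 'Prod'
$domainDn     = 'DC=corp,DC=example,DC=com'

# Compose the target OU DN from the property values.
$ouDn = "OU=$environment,OU=$businessUnit,OU=Servers,$domainDn"

# Pre-create (pre-stage) the computer account so the machine lands in the right OU
# when it joins the domain during guest customization.
New-ADComputer -Name $machineName -Path $ouDn -Enabled $true
```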

This extension is proof-of-concept or demo grade. While it runs well and consistently, it has not been put through a formal quality assurance process, so please use with caution. Please see the disclaimer and other information in the readme.txt file in the package.

 

[Read More]

One of the most frequent asks when using vCAC is, “How do I automatically deploy machines using my company’s hostnaming standards?”  Since the out-of-the-box hostnaming only provides a way to do prefix-suffix naming, the answer to this question is usually that it will require customization.

This solution is intended to provide a way to implement this functionality by using a small, highly versatile custom extension which can handle 95% of use cases without writing custom code.

The rest of this article contains instructions on installing and configuring the vCAC Custom Hostnaming Extension.  This extension allows administrators to model very specific custom hostnaming schemes for their vCAC virtual machines, multi-machine services, and vCloud Director vApps using vCAC custom properties, with dynamic creation of stock machine prefixes and index tracking for each unique hostname combination.
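
To illustrate the general idea, the sketch below builds a hostname such as NYCWEBP001 from a few property values plus a zero-padded index. The scheme, the values, and the padding are assumptions for the example, and the real extension tracks its indexes per unique combination inside vCAC rather than in a script variable.

```powershell
# Minimal sketch (assumptions): compose a hostname from values that could come
# from vCAC custom properties, plus a per-combination index.
$site        = 'NYC'    # e.g. from a compute-resource or group property
$role        = 'WEB'    # e.g. from a blueprint property
$environment = 'P'      # P = production, D = development, etc.

# Index tracking per unique combination; the real extension persists this in vCAC.
$indexes = @{}
$key = "$site$role$environment"
$indexes[$key] = $indexes[$key] + 1

# Zero-pad the index to three digits and assemble the hostname.
$hostname = '{0}{1}{2}{3:D3}' -f $site, $role, $environment, $indexes[$key]
$hostname   # -> NYCWEBP001
```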

This extension is proof-of-concept or demo grade.  While it runs well and consistently, it has not been put through a formal quality assurance process, so please use with caution.  Please see the disclaimer and other information in the readme.txt file in the package.

 

[Read More]

In this walk-through we will be deploying a logical router and configuring routing between (2) logical networks that we created in an earlier post. Logical routers consist of two components: a virtual appliance that is deployed into your vSphere environment (in the MoaC lab all routers are deployed to our management cluster) and the vSphere kernel module.  Remember the host preparations we performed as part of the NSX installation?  That was installing the NSX kernel modules.

The NSX Logical Routers perform East-West (VM-to-VM) routing as well as North-South routing.  The East-West routing performed by the Logical Routers affords you some extra efficiency by allowing VM-to-VM communication across different subnets to happen in the vSphere kernel when those VMs reside on the same host.  You also gain efficiency when communicating between VMs on different hosts: traffic traverses host to host instead of needing to go out to a physical router on the network and then back to the other VM.  In this post you will witness this as we place a virtual machine on each of the logical switches we created and the Logical Router performs routing between the two networks right in the host's kernel. Although this specific post focuses on the East-West routing within the Logical Router, we will cover the North-South routing configuration in another post.

 

[Read More]

If you are familiar with “Network Scopes” from vCNS, then “Transport Zones” should be familiar to you.  If not, here is some useful information regarding these zones.

Transport Zones dictate which clusters can participate in the use of a particular network.  Prior to creating your transport zones, some thought should go into your network layout and what you want to be available to each cluster.  Below are some different scenarios for transport zones.

 

In the “MoaC” environment I have three clusters.  There is a Management Cluster in which all management servers are hosted, including all components of NSX; this will include all of the Logical and Edge routers, which we have not yet configured, but the concept is important to know.  I will not be placing any routers in any cluster other than my management cluster.  I then have a Services Cluster which will host all of my provisioned machines that are not part of the core infrastructure, and finally I have a Desktop Cluster in which I will be hosting VDI desktop instances.

 

[Read More]

I know you're excited to get right down to the meat of the installation, but there is some housekeeping that we need to get out of the way first.  There are a number of pre-requisites that we need to ensure exist in the environment.

 

Pre-requisites

  1. A properly configured vCenter Server with at least one cluster. (Ideally (2) clusters: (1) Management Cluster & (1) cluster for everything else.)
  2. Each cluster should have at least (2) hosts. (More would be better; memory will be important.)
  3. You will need to be using Distributed Virtual Switches (DvSwitches), NOT standard vSwitches.
  4. If you are NOT running vSphere 5.5, you will need to have your physical switches configured for multicast. (Unicast requires vSphere 5.5.)
  5. You will need a VLAN on your physical network that you can utilize for VXLAN. (A quick PowerCLI sanity check of these prerequisites is sketched after the MoaC configuration below.)

To give you an idea, below is the configuration for the “MoaC” lab that I will be working in.

MoaC Configuration

  • vCenter 5.5 U2b
  • (3) Clusters
    • Management Cluster with (2) vSphere ESXi 5.5 U2 hosts
      • 32GB Memory
      • Cluster-only DvSwitch using NIC teaming
    • Services Cluster with (4) vSphere ESXi 5.5 U2 hosts
      • 196GB Memory
      • Cluster-only DvSwitch using LAG
    • Desktop Cluster with (2) vSphere ESXi 5.5 U2 hosts
      • 112GB Memory
      • Cluster-only DvSwitch using LAG
  • Physical VLAN trunked to all vSphere hosts in all clusters.
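
If you want a quick sanity check of the prerequisites before starting, something along the lines of the PowerCLI sketch below can help. The vCenter name is an assumption, and it only looks at host counts and Distributed vSwitches; the physical VLAN and multicast requirements still have to be verified outside of PowerCLI.

```powershell
# Rough prerequisite check (the vCenter name is an assumption).
Connect-VIServer -Server 'vcenter.example.com'

# Each cluster should have at least (2) hosts.
foreach ($cluster in Get-Cluster) {
    $hostCount = @(Get-VMHost -Location $cluster).Count
    '{0}: {1} host(s)' -f $cluster.Name, $hostCount
}

# NSX needs Distributed Virtual Switches, not standard vSwitches;
# list what this vCenter already has.
Get-VDSwitch | ForEach-Object { 'DvSwitch found: {0}' -f $_.Name }
```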


[Read More]

vCAC 5.2 was officially released yesterday and made available publicly on the VMware website located here. Although it is available, only customers that have licenses for the product can access the download. Currently there is no public trial available.

New features in vCAC 5.2

  • Enhanced vCloud Director Integration – Support for Pay As You Go, reservations of partial oVDCs, individual management of VMs within a vApp, and management of existing vApps.
  • Support for KVM – KVM support is provided through Red Hat Enterprise Virtualization Manager 3.1 and supports provisioning of machines and management capabilities for the provisioned managed VMs.
  • vCloud Networking and Security (vCNS) – Supports provisioning of machines into existing VXLANs, security groups, as well as load balancers.
  • Customizable Reclamation Workflows – This is an enhancement to vCAC's reclamation workflows, which were previously very static and not customizable. In this release you now have the ability to customize a new lease length and the wait time before enforcing the new lease period.
  • SRM Compatibility – Notice the word compatibility. vCAC will now discover both the primary and recovery VM but allow management of only the primary. So there is no real functional support for SRM, but it is at least now compatible and able to function in an SRM environment.
  • Windows 2012 Managed Guest OS – vCAC 5.2 now offers support for Windows 2012 as a guest operating system.
  • Lots of bug fixes – If you read the release notes located here, you will see there are about 5 pages of resolved issues.

 

[Read more...]

During a POC something was brought to my attention that I haven't heard anyone ask for before, but it seems like a very useful and valid need. The ask was to be able to set CPU cores during provisioning rather than CPUs. Operating systems and other applications are licensed by sockets, not cores, so instead of having 8 CPU sockets with 1 core each, why not have 1 CPU socket with 8 cores? So I decided to build a solution that would solve this and change the CPU sockets to cores.

Now, I prefer to do as much as I can in the Design Center and with the workflow stubs, because then they will work for everyone without the need for the CDK. Taking that into consideration, here is what I have built.

Background Info:

I am executing my script at the MachineProvisioned state of the virtual machine's lifecycle. This can mean different things based on the provisioning type that is selected. If we are talking about cloning, then it means that the clone has finished, vCAC hardware customization has taken place, the Customization Specification has executed, any operations performed by the guest agent are complete, and the VM is, for all intents and purposes, complete. Using the WFStubMachineProvisioned workflow, however, I can perform additional operations before the machine is handed off to the owner. In this case I'm using the workflow stub to execute a PowerShell script named SocketsToCores.
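
To give a sense of what a script like SocketsToCores does, here is a minimal PowerCLI sketch that reconfigures a VM from many single-core sockets to one socket with the same number of cores. The VM name is an assumption for the example; the actual script wired into the WFStubMachineProvisioned stub also has to pull the machine name and credentials from vCAC.

```powershell
# Minimal sketch (the VM name is an assumption; the real SocketsToCores script
# is invoked from the WFStubMachineProvisioned workflow stub with vCAC context).
Connect-VIServer -Server 'vcenter.example.com'

$vm = Get-VM -Name 'web01'
$totalVcpus = $vm.NumCpu          # e.g. 8 sockets x 1 core as provisioned by vCAC

# Reconfigure to a single socket exposing all vCPUs as cores.
# Changing cores-per-socket typically requires the VM to be powered off.
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NumCPUs           = $totalVcpus
$spec.NumCoresPerSocket = $totalVcpus
$vm.ExtensionData.ReconfigVM_Task($spec) | Out-Null
```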

 

[Read more....]