VMware Cloud Community
BrettK1
Enthusiast

Round Robin deployments, or spreading a single deployment across multiple cloud zones?

In a template that allows a large number of machines to be deployed at once, I'm wondering if there is a way to spread that individual deployment across multiple cloud zones.

In this instance, there are 2 clusters, and we'll put an upper limit on the number of VMs from this Project.  Let's say 20 VMs on each cluster.  Currently, if we deploy 30 VMs, even though the project has resources to hold 40, vRA will say 'sorry, there's no place that can hold 30 VMs'.  'Spread' apparently only looks for the least dense cloud zone and tries to throw ALL of the deployment at that one location.  Is there a way to spread the deployment itself?
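
For reference, a simplified sketch of the sort of template in question: a single machine with a count driven by an input (the image/flavor values here are illustrative, not our real ones):

    inputs:
      vmCount:
        type: integer
        title: Number of VMs to deploy
    resources:
      Cloud_Machine_1:
        type: Cloud.Machine
        properties:
          image: ubuntu        # illustrative image mapping name
          flavor: small        # illustrative flavor mapping name
          count: '${input.vmCount}'
          networks:
            - network: '${resource.Cloud_Network_1.id}'
      Cloud_Network_1:
        type: Cloud.Network
        properties:
          networkType: existing

All instances of Cloud_Machine_1 appear to be evaluated as a single placement unit.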

Just doing 2 deployments of half each is a viable workaround in this instance, but it would be best if that could be avoided.

4 Replies
emacintosh
Hot Shot

I have no idea how accurate this is, but according to Spas's video, if items are grouped together in a cloud template (hard-linked, I guess), then vRA will try to deploy them together.  If that is the case, maybe you could play with separating them into different groups to see if that works?  Not sure how that would affect your overall design, though.

YouTube: Zero2Hero: Constraint Tags & Placement (Cloud Assembly) - Grouping Chapter
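
For example, something along these lines might be worth experimenting with: split the machines into two groups and pin each group to a different zone with constraint tags.  Purely a sketch, and the tag values are made up; they'd have to match capability tags you put on your cloud zones:

    resources:
      Cloud_Machine_A:
        type: Cloud.Machine
        properties:
          image: ubuntu               # illustrative
          flavor: small               # illustrative
          count: '${input.vmCount / 2}'
          constraints:
            - tag: 'zone:cluster-a'   # hypothetical capability tag on zone 1
      Cloud_Machine_B:
        type: Cloud.Machine
        properties:
          image: ubuntu
          flavor: small
          count: '${input.vmCount / 2}'
          constraints:
            - tag: 'zone:cluster-b'   # hypothetical capability tag on zone 2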

BrettK1
Enthusiast

I've watched a bit of that video before, and will likely need to watch it again as I get more familiar with vRA.
As this is a single VM and network with X instances of it deployed, it looks like vRA treated all of the 'resource instances' as linked and only evaluated placement for them once.  Hopefully an upcoming release will give us the option to evaluate placement per instance if we desire.

emacintosh
Hot Shot

Hmm, so are you specifying a count in the properties of your VM in the cloud template, or specifying the number of deployments you want at submit time?


If it's the former, then I'm guessing they'll all be placed together.  In that case, maybe add another VM/Network pair to the cloud template and break up the instances between them?  I'm not sure if or how that would affect any extensibility you're doing for those builds, but you could at least see if it gets you past the placement issue.  Something like the sketch below is what I have in mind.
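
A rough sketch only; the property values are placeholders, and I'm assuming the expression syntax handles simple integer arithmetic on the input:

    resources:
      Cloud_Machine_1:
        type: Cloud.Machine
        properties:
          image: ubuntu        # illustrative
          flavor: small        # illustrative
          count: '${input.vmCount / 2}'
          networks:
            - network: '${resource.Cloud_Network_1.id}'
      Cloud_Network_1:
        type: Cloud.Network
        properties:
          networkType: existing
      Cloud_Machine_2:
        type: Cloud.Machine
        properties:
          image: ubuntu
          flavor: small
          count: '${input.vmCount / 2}'
          networks:
            - network: '${resource.Cloud_Network_2.id}'
      Cloud_Network_2:
        type: Cloud.Network
        properties:
          networkType: existing

If the two machine groups really are placed independently, each pair could land in a different cloud zone.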


But if it's the latter, each deployment shouldn't know about the others and should be placed independently.  That's been our experience, at least.  To clarify: we have a single VM in our template, we release the template to the catalog, and we set the max instances (SB -> Content&Policies -> Content -> <template> -> Configure Item).  For some reason there seems to be a hard limit of 10 there.  But our use case is to let users build multiple similar-but-separate servers at once, which sounds different from what you're after?

BrettK1
Enthusiast

The former, through user input.  It's a large number of VMs that will only be around for about two months (they're for a class, so we push the deployment ourselves ahead of time), so it's basically one click to deploy (well, a couple of clicks, plus typing the number of VMs) and one click to delete them all when they're done.
At least for THESE, there is zero extensibility attached, which makes this case super simple, and since there's no need to worry about ever deploying an 'odd' number of VMs, I certainly could try tossing in another VM/Network pair (and just remember to deploy 'half' the number I otherwise would!).

Edit:  So I gave this a try, and it basically does what we want.  The 'spread' placement checks which cluster is the 'least dense' (by number of VMs, not by consumed resources) and chooses that as the target location.  As our clusters are pretty close, this method 'fills' the first cluster until the two are at the same density, and then alternates placements after that.  I will just need to change the project limits for each cloud zone from the 'actual intended usage' to account for the fact that these VMs won't be evenly distributed across the clusters.

Kudos for a 'kind of workaround', though I'm hoping for an eventual 'correct answer' (which may need to come in the form of new features down the road).
