Hope this is the right forum for this and thanks for your time.
I’m trying to figure out how to delay the boot of all the VMs provisioned by a vRealize Automation blueprint, so that they all power on at the same time once provisioning has finished for every machine.
I know you can draw an arrow so one VM will wait for another to complete before it starts, but what I need is for all the VMs to be deployed and then held, not starting until every one of them is provisioned.
The reason in this case is that we are deploying a set of domain controllers. If only 3 of 5 are powered up while the other 2 are still provisioning, it may cause issues with the DC roles: certain roles wouldn't be available on the network because the machines that hold them aren't done provisioning and powering up yet. It would be best if they all waited after provisioning finished and then powered up at the same time.
Hope that made sense,
Thanks for your time.
I don't think trying to figure out a way to do that is the best way to go about it. I'd use a software component that takes the IPs of the other nodes being deployed as a parameter and runs a check loop to make sure they're all available. Once they all are, kick off the software component that installs the AD role and other tools.
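Something like this, as a rough sketch in Python (the hostnames, port, and the `wait_for_peers` helper are just illustrative; in practice this logic would live in your software component's script):

```python
import socket
import time

def peer_reachable(host, port, conn_timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=conn_timeout):
            return True
    except OSError:
        return False

def wait_for_peers(peers, timeout=600, interval=10):
    """Poll every (host, port) pair until all answer or we hit the deadline.

    peers    -- list of (host, port) tuples for the other nodes
    timeout  -- overall deadline in seconds
    interval -- pause between polling passes
    Returns True once every peer has answered, False on timeout.
    """
    deadline = time.monotonic() + timeout
    pending = set(peers)
    while True:
        pending = {p for p in pending if not peer_reachable(*p)}
        if not pending:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Hypothetical usage: block until the other DCs answer on the LDAP port,
# then continue with the rest of the component.
# if wait_for_peers([("dc2.example.local", 389), ("dc3.example.local", 389)]):
#     pass  # proceed with the next software component
```

The port you probe would depend on what "available" means for your nodes; 389 (LDAP) is just one plausible choice for DCs.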
Hope I’m understanding what you're saying, but
in our case we are deploying clones of production DCs into microsegmented environments, so they already have all their data and roles on them before they are even cloned.
I think you'd be better served by cloning from a generic Windows Server template and creating the identity via software components, config management, or some other automation, so you aren't so dependent on boot order. There are other factors that would affect how this works today even if they all booted at the same time: Is storage overloaded? Is the network overloaded? Does DRS move things around and cause impact? The point being, even if you boot them all at the same time, you could still fail if they don't become ready, from an application standpoint, within a set window of time, and there's very little way to control that.
OK, I appreciate your help and your time.
The problem is, in our case, this is part of our DR setup, as well as dev/test and all of that, and it's all stood up already.
It's all been coded in Orchestrator for the background tasks like network creation, and uses vRealize Automation for some of the blueprint tasks as well as for the customer portal. It's pretty neat: it deploys clones of any and all prod machines into a microsegmented NSX environment, automatically creates all the needed networks, DLRs, and edge services, creates access through Horizon View into those environments, and lets users build, test, bug fix, and destroy on demand without contacting the IT team.

I just wanted to make sure that the DCs not coming up at the same time wasn't going to be a problem. Honestly, it might not even be one in the long run. Right now the last 2 DCs come up about 10 minutes after the others finish. That might not cause any problems... or it might. At this point I don't know, but it seemed like there would be some 'wait' command out there to delay the boot of a VM until another VM is done, and so far that doesn't seem to be the case. Just trying to avoid future issues.