I need some assistance / some brains to pick on how to achieve 'full-on' network automation without NSX, and without vSphere DHCP via vRA's network profiles, using MS DHCP instead. I would have multiple VLANs from which I can grab addresses and make use of reservations.
Here are a few questions / scenarios:
1) Can a build profile manage multiple hard-coded VLANs and auto-balance addressing across them? i.e. once one VLAN fills up, it moves on to the next VLAN specified. Or what would be the equivalent of this?
2) If one uses network profiles, can vSphere DHCP update the Microsoft DHCP server with the address that was used, so that it reserves that IP and does not make it available to anyone else?
3) How can I make 'intelligent' VLAN selection possible via certain selection criteria, similar to a few posts about incrementing and building a unique hostname from selections, i.e. COMPANY/LOCATION/ENVIRONMENT? So if I choose COMPANY=Nike, LOCATION=EU, ENV=Dev, the name could be NEUDEVxx, and behind each selection it would determine which VLANs are available for segmentation purposes. So if I choose Nike/EU/Dev, I have VLANs 1-10 to choose from; if I choose Nike/EU/Test, I have VLANs 11-20 behind it to choose from.
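To make question 3 concrete, here is a minimal sketch in Python of the selection-to-VLAN mapping and hostname prefix. The table, codes, and naming scheme are assumptions modeled on the Nike/EU examples above; in vRA this logic would typically live in a vRO action or custom property lookup:

```python
# Hypothetical mapping of (company, location, environment) selections to
# VLAN ranges, modeled on the Nike/EU/Dev and Nike/EU/Test examples.
VLAN_MAP = {
    ("Nike", "EU", "Dev"):  list(range(1, 11)),   # VLANs 1-10
    ("Nike", "EU", "Test"): list(range(11, 21)),  # VLANs 11-20
}

def hostname_prefix(company, location, environment):
    # First letter of the company plus the location and environment codes,
    # matching the NEUDEVxx pattern (a unique number is appended later).
    return (company[0] + location + environment).upper()

def vlans_for(company, location, environment):
    # VLANs available behind this selection; empty if the combo is unknown.
    return VLAN_MAP.get((company, location, environment), [])
```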
Does this make sense? Any input would be appreciated.
Consider this: you are going to use an external system for network selection, so you are on the right track in creating the logic. I have two different IP management systems in my environment, and the environment and location determine which system to use. With that said, I have different workflows that make API calls to the IP management system to request the IP and DNS records, and also to query for the next available number for the base server name. You could actually create an action that queries the system for available IPs as part of the check, or as another process during the build, to make the determination for you and apply the available values. There are a couple of ways to do this, and you're going to need to decide where in the process to do it.
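As a rough sketch of the kind of action described above - note the endpoint path, auth scheme, and JSON shape below are invented for illustration; Infoblox, MS IPAM, etc. each have their own real API:

```python
import json
import urllib.request

IPAM_URL = "https://ipam.example.com/api"  # placeholder base URL

def parse_next_ip(body):
    # Pull the address out of a hypothetical JSON reply such as
    # {"ip": "10.20.30.41", "subnet": "10.20.30.0/24"}.
    return json.loads(body)["ip"]

def request_next_ip(subnet, token):
    # One REST round-trip asking the IPAM for the next free address in the
    # given subnet; a build workflow would call this before provisioning.
    req = urllib.request.Request(
        f"{IPAM_URL}/subnets/{subnet}/next-ip",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_next_ip(resp.read())
```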
@Grant -> Currently we use 6.2.1 and we separate our switches per cluster, so all portgroups are unique per cluster, which makes this tricky. The reason behind the separate switches per cluster is the 'no eggs in one basket' approach. Should we look at consolidating the switches into a single one? And what risks would we carry along with it?
@sbeaver - Do you make use of Infoblox or MS IPAM? Currently the network is not decided, as I have hard-coded a landing VLAN within the build profile. Essentially this would need to change once the cluster starts filling up: move the compute resource to the next cluster with the corresponding portgroup for that cluster. That makes life slightly more difficult, hence my question to Grant about perhaps consolidating the switches. Coming back to where the decision takes place: this is during the request phase - the user would make a similar 'COMPANY/LOCATION/ENVIRONMENT' selection. The ideal would then be that, based on the aforementioned, 'x' number of VLANs are mapped to those criteria and consumed accordingly. It also needs to be intelligent enough to move on to the next VLAN should the subnet start filling up.
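The 'move on once the subnet fills up' part could be as simple as walking the mapped VLANs in order and taking the first one under a utilisation threshold. A sketch, assuming the usage and pool-size numbers have already been fetched from the DHCP server or IPAM:

```python
def pick_vlan(vlans, used, capacity, threshold=0.9):
    # vlans:    candidate VLAN IDs in preferred order for this selection.
    # used:     leases consumed per VLAN (missing key = nothing used yet).
    # capacity: pool size per VLAN. In a real workflow both would come
    #           from the DHCP server or IPAM, not be hard-coded.
    for vlan in vlans:
        if used.get(vlan, 0) / capacity[vlan] < threshold:
            return vlan
    raise RuntimeError("all candidate VLANs are full")
```

With a 90% threshold, a /24 pool with 250 of 254 leases taken would be skipped and the next mapped VLAN used instead.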
I've not seen that approach before, and yes I would definitely consider consolidating onto a single DVS. The config of the DVS can be easily exported/imported in the event of any kind of issue or corruption, so definitely not a risk from that perspective.
I've done a (somewhat) similar request model whereby clusters in different datacenters had different portgroups/VLANs available. The first dropdown was "Location", which then exposed a network zone, which then allowed for selection of a VLAN within that zone.
The challenge, to a certain extent, is that the selections need to be something understandable by the requester. The hardest part here is that it's not dynamic but XML-driven - this means a pretty big overhead as you roll through cluster availability.
Thank you for the response! I'm kind of deviating from the initial thread now, but would it be possible to provide the pros and cons of having multiple distributed switches (one per cluster) vs a single consolidated switch per vCenter? What can go wrong, and if so, what's the impact?
Also, to your knowledge, can 6.2.1 achieve 'intelligent multiple-VLAN consumption and IP distribution' without the use of vRA's IP distribution? Meaning via DHCP, and once the scope is full, move on to the next specified VLAN for consumption.
It is kind of a two-part question at present: 1) I would like to consume those DHCP VLANs (intelligently), and 2) as a second phase, moving to vRA 7, have this done 'dynamically' based on selection.