So I thought I would take a look at OpenStack in my lab prior to any kind of POC at work, but I have failed miserably at the first hurdle. After deploying the OVA I started the configuration wizard, but at the "Configure management networking" step it fails during network verification with "Failed to validate the IP address", and I have no clue why, because I don't actually know what specifically it is trying to verify.
Is it expecting to find a DHCP server on that network range? I have tried all sorts of ranges and combinations without success.
If it's expecting a DHCP server on a segregated network, I suppose I can spin up a pfSense box or similar. Or have I completely misunderstood how it works, and does it do all the networking for me?
Sorry to hear about the network range error. I have a few questions for you...
1. How are you segregating the networks? Do you have actual VLANs in your home lab?
2. Is the VIO management server attached to the same DVS portgroup as the one you specify for the Management Network setting (VM Network DVS)?
3. From your desktop, can you communicate with the IP range allocated to the OpenStack API Access Network setting?
----These are the network addresses that end users will use to access the OpenStack services and dashboard.
4. Which version of VIO is this?
5. Will you be using NSX or DVS networks?
I am attaching a diagram showing a high-level look at the required VIO network configuration.
If you are using DVS networks instead of NSX, you do not need the Transport Layer VLAN.
So, to summarize, if you are using NSX, 4 VLANs are required. If you are not using NSX, 3 VLANs are required.
The VMs on the OpenStack management network need to be able to communicate with vCenter, NSX Manager (if applicable), and with the VIO Management Server.
Is this thread still active? Did you get your lab running?
I suspect the check is failing on vCenter name resolution against the DNS, and/or on name resolution for the OpenStack Management Server (OMS). Check the DNS; based on the limited information we have, that is one place where this could be going wrong.
Additionally, is the VIO port group on a DVS? If by happenstance it is on a VSS, that might fail the check.
Come to think of it, this being a home lab and all: how many resources does vCenter have, and what is the load on it? It might not be responding in time for the deployment. For example, if vCenter is one of many workstation VMs on a typical-performance laptop that is trying to run the whole lab, including nested ESXi, I would not expect that to work on the first try.
Regarding whether it is expecting a DHCP server: I have deployed several VIO environments and did not use DHCP in a single one of the deployments, so I do not think that is what is troubling you. However, not having a gateway on the API range would mean your jump box has to be on the same network range, as the load balancers will not route on the public API access range without a gateway.
If it is not a vCenter performance issue due to lab limitations, I would bet money that it is the network configuration. Both the OMS and vCenter need to be able to ping each other and to resolve the other's short and long names by lookups against the DNS. Updating the hosts file would not help if the check queries the provided DNS directly, bypassing hosts files, and I would be surprised if it doesn't do that; it would not be a proper check of the provided DNS IP otherwise.
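I don't know exactly what the wizard's validator runs under the hood, but the kind of check described above (asking the DNS server you supplied directly, so hosts-file entries can't mask a failure) can be reproduced with a short stdlib-only Python sketch. Every hostname and IP in it is a placeholder for your own lab values:

```python
import socket
import struct

def build_query(name, qid=0x1234):
    """Build a minimal DNS A-record query packet for `name`."""
    # Header: id, flags (RD=1), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode()
                     for p in name.rstrip(".").split("."))
    # QTYPE=1 (A record), QCLASS=1 (IN)
    return header + qname + b"\x00" + struct.pack(">HH", 1, 1)

def parse_first_a(data):
    """Return the first IPv4 address in a DNS response, or None."""
    qid, flags, qd, an, ns, ar = struct.unpack(">HHHHHH", data[:12])
    if an == 0:
        return None
    i = 12
    for _ in range(qd):                 # skip the question section
        while data[i] != 0:
            i += data[i] + 1
        i += 5                          # null label + QTYPE + QCLASS
    for _ in range(an):                 # walk answers looking for an A record
        if data[i] & 0xC0:              # compressed name pointer (2 bytes)
            i += 2
        else:
            while data[i] != 0:
                i += data[i] + 1
            i += 1
        rtype, rclass, ttl, rdlen = struct.unpack(">HHIH", data[i:i + 10])
        i += 10
        if rtype == 1 and rdlen == 4:
            return ".".join(str(b) for b in data[i:i + 4])
        i += rdlen
    return None

def resolve(name, dns_ip, timeout=2.0):
    """Ask `dns_ip` directly for an A record, bypassing /etc/hosts."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query(name), (dns_ip, 53))
        try:
            data, _ = s.recvfrom(512)
        except socket.timeout:
            return None
    return parse_first_a(data)
```

If `resolve("vcenter.lab.local", "10.0.0.53")` (with your actual names and DNS IP) returns None while pinging vCenter by IP works, the DNS side of the validation is the likely failure point; remember to try the short name as well as the FQDN.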
I have a similar problem in that the deployment didn't provision the management cluster's VMs with IP addresses. In my case I'm running NSX, but the management cluster is attached to a VDS for its management IP addressing. Looking at the exported build configuration file, the IP range is good, so I don't know why it didn't assign the addresses statically. Can I simply assign the IP addresses manually to fix this, or do I have to destroy and redeploy the deployment?
Yes, the VMs were placed on the management network but didn't get IP addresses allocated. I originally put them on a dedicated NSX-based VXLAN for management and got this symptom. It turns out this isn't supported, and when I moved them to the VDS, it corrected the problem. It shouldn't matter, since it meets the L2-adjacency requirement, but it does at the moment. Anyway, I now have IP addresses on my VIO management cluster, as long as it's on the VDS and not on a virtual wire switch from NSX.
I tried an NSX virtual wire portgroup, and the VMs can get IP addresses, but since the management server VM is usually on a VDS portgroup or standard switch network that is not connected to these virtual wire networks, the management server cannot reach and configure the OpenStack service VMs, so the deployment also fails.