OsburnM
Hot Shot

VIO 6 - VDS - DHCP Issue

Greetings-- New to VIO (doing a POC), and I've deployed a VIO 6 environment on vSphere 6.7 with a vDS (no NSX).  I have a dedicated /24 for our compute network and I'm struggling with IP addressing.  I have an external DNS/DHCP environment and intended to use it with DDNS, but that's not a requirement.  I created my compute network with DHCP disabled.  The VMs get addresses from my external DHCP servers just fine, but VIO still "assigns" what it thinks each instance should have out of that /24 and doesn't honor what my external DHCP is handing out.  So I tried turning off my external DHCP server and selecting the DHCP Agents option, which does create the agents, but the compute instance VMs don't seem to be pulling addresses from them.
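
For reference, the CLI equivalent of what I did in the UI would look roughly like this (the physical network name, VLAN ID, and subnet range are placeholders, not my actual values):

  • openstack network create --provider-network-type vlan --provider-physical-network dvs --provider-segment 100 compute-net
  • openstack subnet create --network compute-net --subnet-range 192.168.50.0/24 --no-dhcp compute-subnet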

Any ideas?  Is this even possible in VIO6 with a vDS setup?

Thanks,

1 Solution

Accepted Solutions
OsburnM
Hot Shot

Closing this thread-- all of these VIO 6 & DHCP issues turned out to be caused by a broken deployment: the service passwords in the base images had all expired.  VMware needs to fix this before the product is even usable out of the box.

See the following thread, which has the solution: VIO 6 DHCP Agents Bug?

4 Replies
zhenmei
VMware Employee

We do support VIO 6 with a vDS setup, but I'm afraid there is no support for an external DHCP server.  Could you try creating a separate VLAN network that is not served by your external DHCP server, then boot an instance on it and see whether the instance gets an IP address from the Neutron DHCP server?
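
A rough sketch of that test, assuming the openstack CLI is available (the physical network name, VLAN ID, subnet range, image, and flavor below are only examples):

  • openstack network create --provider-network-type vlan --provider-physical-network dvs --provider-segment 200 dhcp-test-net
  • openstack subnet create --network dhcp-test-net --subnet-range 192.168.60.0/24 --dhcp dhcp-test-subnet
  • openstack server create --image <image> --flavor <flavor> --network dhcp-test-net dhcp-test-vm

If the instance comes up with an address from 192.168.60.0/24, Neutron DHCP is working on that network.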

OsburnM
Hot Shot

Thank you for the reply.  That's exactly what I attempted-- I have a dedicated VLAN presented to all the hosts.  I had Neutron create the network and subnet and selected Enable DHCP.  I confirmed the vDS port group was created with the correct VLAN tag and verified the compute VMs were assigned the correct port group; however, none of the compute nodes are receiving a DHCP address from the Neutron DHCP servers.

Any ideas how to troubleshoot this?

[EDIT]

It's also worth noting that I can manually assign an IP/netmask/gateway from that subnet, on that VLAN, directly to a compute node-- and it is able to communicate out.  It just isn't pulling a DHCP address from the Neutron DHCP agents.

[/EDIT]

zhenmei
VMware Employee

Hi,

1. Check that eth2 of the controller node on which the DHCP pod is deployed is connected to a trunk port group.

2. Log in to the DHCP pod and check that the dnsmasq process is running:

  • osctl exec -it <neutron-dhcp-agent-default-XXX> bash
  • [root@controller-6q2sw2tnbv /]# ps aux|grep dns

nobody     159  0.0  0.0   6708   148 ?        S    03:08   0:05 dnsmasq --no-hosts --no-resolv --pid-file=/var/lib/neutron/dhcp/fa5747cf-2911-4fc2-9c1c-c7b359a12a5e/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/fa5747cf-2911-4fc2-9c1c-c7b359a12a5e/host --addn-hosts=/var/lib/neutron/dhcp/fa5747cf-2911-4fc2-9c1c-c7b359a12a5e/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/fa5747cf-2911-4fc2-9c1c-c7b359a12a5e/opts --dhcp-leasefile=/var/lib/neutron/dhcp/fa5747cf-2911-4fc2-9c1c-c7b359a12a5e/leases --dhcp-match=set:ipxe,175 --dhcp-userclass=set:ipxe6,iPXE --local-service --bind-interfaces --dhcp-range=set:tag0,1.1.1.0,static,255.255.255.0,86400s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq.conf --domain=example.org

3. If all of the above works as expected, make sure the physical switch allows the VLAN ID you configured.  You can also try migrating the test VM to the same ESXi host as the controller node and see whether it gets an IP address from Neutron DHCP; if it does, Neutron DHCP itself works, and the problem is likely that the VLAN is not being carried between hosts on the physical switch.
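
A few additional checks that may help narrow it down (the network name, pod name, and tap interface below are examples, not literal values):

  • openstack network agent list    # the DHCP agent should show as alive and UP
  • openstack port list --network dhcp-test-net --long    # the instance port and the DHCP port should be ACTIVE
  • osctl exec -it <neutron-dhcp-agent-default-XXX> bash
  • tcpdump -ni <tap-interface> port 67 or port 68    # watch whether DHCP requests from the instance reach dnsmasq

If no DHCP requests show up in tcpdump, the traffic is being dropped before it reaches the agent, which usually points at the VLAN configuration on the vDS or the physical switch.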

OsburnM
Hot Shot

Closing this thread-- all of these VIO 6 & DHCP issues turned out to be caused by a broken deployment: the service passwords in the base images had all expired.  VMware needs to fix this before the product is even usable out of the box.

See the following thread, which has the solution: VIO 6 DHCP Agents Bug?
