We do support VIO 6 with a vDS setup, but I'm afraid there is no support for an external DHCP server. Could you try creating a different VLAN network with your external DHCP server, then boot an instance and see whether it gets an IP address from the Neutron DHCP server?
Thank you for the reply. That's exactly what I attempted: I have a dedicated VLAN presented to all the hosts. I had Neutron create the network and subnet with "Enable DHCP" selected, confirmed that the vDS port group was created with the correct VLAN tag, and verified that the compute VMs were assigned the correct port group. However, none of the compute nodes are receiving a DHCP address from the Neutron DHCP servers.
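For reference, a provider-network setup like the one described could be created from the OpenStack CLI roughly as follows. The physical network name, VLAN ID, network/subnet names, and address range are all placeholders, not values from this deployment:

```shell
# Create a VLAN-backed provider network.
# "dvs", segment 100, and the names are example values only.
openstack network create demo-vlan \
  --provider-network-type vlan \
  --provider-physical-network dvs \
  --provider-segment 100

# Create a subnet with Neutron DHCP enabled
# (enabled by default; --dhcp just makes it explicit).
openstack subnet create demo-subnet \
  --network demo-vlan \
  --subnet-range 192.0.2.0/24 \
  --dhcp
```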
Any ideas how to troubleshoot this?
It's also worth noting that if I manually assign an IP, netmask, and gateway from that subnet directly to a compute node on that VLAN, it is able to communicate out. It just isn't pulling a DHCP address from the Neutron DHCP agents.
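One way to narrow down where the DHCP traffic is being dropped is to watch for the broadcasts on the wire at each hop. The interface names below are assumptions and will differ per environment:

```shell
# On the compute node: are DHCP DISCOVER/REQUEST packets leaving at all?
# (DHCP uses UDP port 67 server-side and 68 client-side.)
tcpdump -i eth1 -n -e 'port 67 or port 68'

# On the controller node's trunk uplink: do the requests arrive,
# and with the expected VLAN tag?
tcpdump -i eth2 -n -e 'vlan and (port 67 or port 68)'
```

If the requests leave the compute node but never show up on the controller's trunk interface, the drop is in the vDS or physical switch path rather than in Neutron.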
1. Check that eth2 of the controller node on which the DHCP pod is deployed is connected to a trunk port group.
2. Log in to the DHCP pod and check that the dnsmasq process is running:
- osctl exec -it <neutron-dhcp-agent-default-XXX> bash
- [root@controller-6q2sw2tnbv /]# ps aux|grep dns
nobody 159 0.0 0.0 6708 148 ? S 03:08 0:05 dnsmasq --no-hosts --no-resolv --pid-file=/var/lib/neutron/dhcp/fa5747cf-2911-4fc2-9c1c-c7b359a12a5e/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/fa5747cf-2911-4fc2-9c1c-c7b359a12a5e/host --addn-hosts=/var/lib/neutron/dhcp/fa5747cf-2911-4fc2-9c1c-c7b359a12a5e/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/fa5747cf-2911-4fc2-9c1c-c7b359a12a5e/opts --dhcp-leasefile=/var/lib/neutron/dhcp/fa5747cf-2911-4fc2-9c1c-c7b359a12a5e/leases --dhcp-match=set:ipxe,175 --dhcp-userclass=set:ipxe6,iPXE --local-service --bind-interfaces --dhcp-range=set:tag0,220.127.116.11,static,255.255.255.0,86400s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq.conf --domain=example.org
3. If all of the above steps work as expected, make sure the physical switch allows the VLAN ID you configured. You can also try migrating the VM to the same ESXi host as the controller node and see whether it can get an IP address from Neutron DHCP; if it can, Neutron DHCP itself is working and the problem lies in the physical network path.
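Beyond the checklist above, two further checks can help localize the problem. The network name below is a placeholder; the pod-side interface name varies and can be read from `ip addr` inside the DHCP pod:

```shell
# 1. Confirm Neutron actually created DHCP ports on the network,
#    and note their IP addresses.
openstack port list --network demo-vlan --device-owner network:dhcp

# 2. Inside the DHCP pod, capture on its interface. If DISCOVERs
#    never arrive here, the drop is upstream of dnsmasq.
tcpdump -i <pod-interface> -n 'port 67 or port 68'
```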
Closing this thread: all of these VIO 6 DHCP issues turned out to stem from a broken deployment in which the service account passwords in the base images had all expired. VMware needs to fix this before the product is usable out of the box.
See the following thread for the solution: VIO 6 DHCP Agents Bug?