MBouillon
Contributor

ESXi 6.0 Management Network IP address issues

It has been our practice to set DHCP reservations rather than setting static IPs on the NICs assigned to the management network, but that will change after today's issues.  I have two physical NICs, vmnic2 and vmnic4, assigned to the vSwitch that contains the management VMkernel adapters.  The MAC address of vmnic2 is what we used to set the DHCP reservation.  At 11:13am this morning, vmnic4 decided it wanted to grab the IP address, but since no reservation was made for that NIC, it grabbed a random IP address.

How does ESXi6 determine which NIC is to request an IP address, and why would it just randomly change?

I've asked VMware support about using reservations, and they said it was fine.  But regardless, each host will be configured with a static IP in the near future.

Thanks!

Marty Bouillon

2 Replies
vmrale
Expert

MBouillon​,

vmnic2 and vmnic4 are just uplink interfaces connected to the vSwitch. IP addresses are assigned to the vmk# (VMkernel) interfaces, not to the uplinks.
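On the host itself you can compare the MAC addresses of the VMkernel interfaces with those of the uplinks. A sketch using standard esxcli commands from an ESXi shell (vmk0 as the management interface is an assumption based on typical installs):

```shell
# List VMkernel interfaces with their MAC addresses.
# By default, vmk0 inherits the MAC of the physical NIC the
# management network was created on during install; a recreated
# vmk instead gets a VMware-generated 00:50:56 MAC, which would
# no longer match a DHCP reservation keyed to the vmnic's MAC.
esxcli network ip interface list

# List the physical uplinks (vmnic2, vmnic4) with their own MACs.
esxcli network nic list

# Show the current IPv4 configuration (DHCP vs static) per vmk.
esxcli network ip interface ipv4 get
```

The DHCP request is sent by the vmk interface using the vmk's MAC, so the first thing to check is whether vmk0's MAC still matches the vmnic2 MAC used in the reservation.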

If you want me to try to explain what happened, please give me more details about your network configuration.

Regards
Radek

If you think your question has been answered correctly, please consider marking it as a solution or rewarding me with kudos.
MBouillon
Contributor

Thanks vmrale,

I have two 10Gb NICs configured active (vmnic2) / active (vmnic4) on vSwitch0 (a Standard Switch), which carries two VMkernel adapters:

Management - vmk0: vmnic2 (active) / vmnic4 (standby)
vMotion - vmk2: vmnic2 (standby) / vmnic4 (active)
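In case it helps, this is how I verified the teaming order actually in effect per portgroup (standard esxcli; the portgroup names "Management Network" and "vMotion" are the defaults in our environment and may differ elsewhere):

```shell
# Effective NIC teaming/failover policy for the management portgroup,
# including the active and standby uplink order.
esxcli network vswitch standard portgroup policy failover get -p "Management Network"

# Same check for the vMotion portgroup.
esxcli network vswitch standard portgroup policy failover get -p "vMotion"
```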

Since we prefer to use DHCP reservations rather than statically assigning IP addresses, a reservation was created in DHCP using the MAC address of vmnic2.  This host has been in production for 4 years and has been patched, had its firmware updated, and been rebooted many times.  Friday morning the host was patched, the firmware was updated, and the host was rebooted.  After the reboot, I took it out of maintenance mode and the cluster immediately rebalanced itself via DRS.  Approximately 20 minutes after bringing the host back online, vmnic4 decided to grab an IP address, and that is when the host lost connectivity to vCenter.  Fortunately the VMs were still accessible, but we had no way to vMotion them off the host.

There are some other goofy issues going on.  I gave the Management interface (vmk0) a static IP address, and that got us back up so we could put the host into maintenance mode again.  But while vMotioning VMs off, the host kept losing network connectivity.  The only way to get things functioning again was to stop/restart the management network.  Once restarted, we could manually vMotion 3 or 4 VMs; any more than that and we would lose connectivity again and have to stop/restart the management network.
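For anyone following along, the stop/restart I'm describing is the DCUI "Restart Management Network" action; roughly the same thing can be done from a console session by toggling the management vmk (a sketch assuming vmk0 is the management interface; don't run this over SSH through vmk0, since the session will drop when the interface goes down):

```shell
# Disable and then re-enable the management VMkernel interface,
# which forces it to re-establish its network state.
esxcli network ip interface set --enabled=false --interface-name=vmk0
esxcli network ip interface set --enabled=true --interface-name=vmk0
```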

Very odd.  I have sent the hardware logs to the hardware vendor and have a ticket open with VMware, so we'll see if they can find any issues.

Hope this gives you the information you are looking for.

Thanks!

Marty
