VMware Cloud Community
TryllZ
Expert

Free NIC port not getting assigned to a dvSwitch (An error occurred while communicating with the remote hosts)

Hi,

I have this setup in Workstation.

I had a vmnic attached to a vSwitch that wouldn't migrate to a DV Switch, so I removed the vmnic from the vSwitch. Now I'm trying to add it to the DV Switch directly, but it fails.

vServer-2019-09-22-16-36-49.png

vServer-2019-09-22-16-36-58.png

vServer-2019-09-22-16-37-38.png

vServer-2019-09-22-16-38-04.png

When I try to migrate the physical adapter to the DV Switch I get this error.

vServer-2019-09-22-16-50-35.png

I am unable to find the issue. Where could I be wrong?

Thank You

8 Replies
daphnissov
Immortal

You're trying to add a vmnic to a vDS which has the management vmkernel port bound to it also. That vmnic probably isn't configured to allow communication from vCenter to this host, so when you add it and it breaks the communication, vCenter rolls the configuration back. Check this vmnic and ensure it has proper network access for your management vmkernel.
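
If it helps to double check that, a short pyVmomi script can list each VMkernel adapter and the physical NICs behind each standard vSwitch on the host. This is only a minimal sketch, assuming pyVmomi is installed; the vCenter address, credentials and host name below are placeholders, not your actual values.

```python
# Minimal pyVmomi sketch; vcenter.lab.local, the credentials and
# esxi-21.lab.local are placeholders for your own lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
content = si.RetrieveContent()

# Locate the ESXi host by name
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi-21.lab.local')

# Each VMkernel adapter: device, the portgroup it sits on, and its IP address
for vnic in host.config.network.vnic:
    print(vnic.device, vnic.portgroup, vnic.spec.ip.ipAddress)

# Each standard vSwitch and the physical NICs backing it
for vsw in host.config.network.vswitch:
    print(vsw.name, 'uplinks:', [p.split('-')[-1] for p in vsw.pnic])

Disconnect(si)
```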

TryllZ
Expert

Hi,

Thanks for the reply.

The port I'm trying to add to the DV Switch does not have any management vmkernel port bound to it; all management ports are on Uplinks 1 & 2, as can be seen in this image:

https___communities.vmware.com_servlet_JiveServlet_showImage_2-2887624-394401_vServer-2019-09-22-16-38-04.png

What I had done was transfer the vmnic port on one of the hosts, attach a new vmkernel to it, give it an IP, and ping it from the same host to test connectivity, and it pinged. Then I moved all the hosts' vmnics along with their vmkernels to the DV Switch; all vmnics migrated except the .21 host's vmnic.
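
For reference, the "attach a new vmkernel and give it an IP" step can also be done with pyVmomi. This is only a hedged sketch, assuming a standard port group named Mgmt-2 already exists on that vSwitch; every name and address below is a placeholder for this lab.

```python
# Hedged sketch: create a VMkernel adapter with a static IP on an existing
# standard port group. 'Mgmt-2' and all addresses are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esxi-21.lab.local')

# New VMkernel port on port group 'Mgmt-2' with a static IPv4 address
spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress='192.168.10.121',
                         subnetMask='255.255.255.0'))
vmk = host.configManager.networkSystem.AddVirtualNic('Mgmt-2', spec)
print('created', vmk)   # e.g. 'vmk1'; ping it from the host to confirm connectivity

Disconnect(si)
```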

Thank You

daphnissov
Immortal

It comes down to a network comm error as I said. The only way vCenter will roll the vDS configuration back is if it loses connectivity to the host. Since this is a nested lab in Workstation, check your access from that vmnic.

RajeevVCP4
Expert

Are you migrating the host's vSS to the vDS, or just removing vmnic0 and adding it to the DVS?

Rajeev Chauhan
VCIX-DCV6.5/VSAN/VXRAIL
Please mark helpful or correct if my answer is useful for you
TryllZ
Expert

Sorry for the delayed reply.

I'm migrating the network from regular vSwitch to dvSwitch.

NathanosBlightc
Commander

Try the following procedure for a better migration from VSS to VDS (a rough scripted equivalent of steps 2 and 3 is sketched after the list):

1. Add the hosts to the VDS

2. Manage the existing hosts and migrate some of the pNICs (vmnics) from the VSS to the VDS to act as the VDS uplinks. I always try to keep a redundant pNIC at this layer for a safer and cleaner distributed vSwitch migration

3. If you have a VMkernel port whose traffic will be carried by these uplinks, you should migrate that VMkernel port in the same wizard in one operation. But if you have more uplinks to handle management traffic:

      3.1 First migrate one of the uplinks and check its connectivity

      3.2 Then migrate the VMkernel port and bind it to the dedicated dvPortGroup on the VDS

      3.3 Once you are sure you have full access to your host's management, migrate the 2nd uplink to the VDS

      3.4 Remember, if host connectivity is lost during the VDS migration operation, vCenter will fail the ESXi VMkernel port back to the old VSS and its related standard port group

4. Migrate all the remaining VM NIC connections and their related physical uplinks to the VDS to finish and confirm the migration.
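
For anyone who would rather drive steps 2 and 3.2 from a script than from the client wizard, below is a minimal pyVmomi sketch. It is an illustration under assumptions only: the vCenter address, credentials, host name, vDS name (DSwitch), dvPortGroup name (DPortGroup-Mgmt), vmnic1 and vmk1 are placeholders, and the host is assumed to already be a member of the vDS (step 1).

```python
# Rough pyVmomi sketch of steps 2 and 3.2, not the vSphere client wizard itself.
# All names, addresses and credentials below are placeholders for your lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip certificate checks
si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ctx)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(o for o in view.view if o.name == name)

host = find_obj(vim.HostSystem, 'esxi-21.lab.local')
dvs = find_obj(vim.DistributedVirtualSwitch, 'DSwitch')
dvpg = find_obj(vim.dvs.DistributedVirtualPortgroup, 'DPortGroup-Mgmt')

# Step 2: edit this host's membership on the vDS so vmnic1 becomes an uplink.
# Note: the pnicSpec list replaces the host's uplink assignment on this vDS,
# so list every vmnic that should stay attached, not just the new one.
host_member = vim.dvs.HostMember.ConfigSpec(
    operation=vim.ConfigSpecOperation.edit,
    host=host,
    backing=vim.dvs.HostMember.PnicBacking(
        pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice='vmnic1')]))
dvs_spec = vim.DistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion, host=[host_member])
WaitForTask(dvs.ReconfigureDvs_Task(dvs_spec))

# Step 3.2: move vmk1 onto the dedicated dvPortGroup, keeping its current IP.
current = next(v for v in host.config.network.vnic if v.device == 'vmk1')
vnic_spec = vim.host.VirtualNic.Specification(
    ip=current.spec.ip,
    distributedVirtualPort=vim.dvs.PortConnection(
        switchUuid=dvs.uuid, portgroupKey=dvpg.key))
host.configManager.networkSystem.UpdateVirtualNic('vmk1', vnic_spec)

Disconnect(si)
```

Because the pnicSpec list replaces the host's uplink assignment on that vDS, keeping the uplink and its VMkernel port in one operation is also what avoids the connectivity loss and rollback described in 3.4.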

Please mark my comment as the Correct Answer if this solution resolved your problem
TryllZ
Expert

Hi Amin,

Thanks for the steps and the explanation, will attempt it and get back here.

TryllZ
Expert

Hi,

So this is what I have attempted thus far over 2 separate deployments in Workstation.

In the 1st case I migrated management vmnic0 and its associated vmkernel (the default network management vmkernel) first, and I received a network connectivity failure warning. This setup failed despite the fact that I had redundancy through a manually created management vmkernel connected to vmnic1. The ESXi hosts lost connectivity and did not recover at all, the vmnics were not migrated, and I could not communicate with the hosts any further.

In the second setup I first migrated vmnic1 and its associated manually created management vmkernel, and this time the migration was smooth without any warnings.

So my question is: why did the 1st method fail even though a redundant management network existed?

Thank You
