Nexus was working when the VSM was on a separate ESXi host from the VEM/dSwitches. When I manually installed the VEM on the same host as the VSM, the VSM lost connectivity to the physical network, and therefore to the distributed switches.
The ESXi host the VSM runs on has only 2 NICs: one NIC carries Nexus Control, Management, and Packet traffic via the dSwitch, and the other NIC is connected to the vSwitch.
I think that if I can get mgmt0 of the VSM onto the vSwitch, things will return to normal. How do I assign this interface to the vSwitch?
I would create new control, management, and packet port-groups on the standard vSwitch if possible, then move the VSM's network connections to the vSwitch; that should restore network connectivity.
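As a rough sketch, the port-groups can be created from the ESXi shell with esxcli (the port-group names, vSwitch name, and VLAN ID below are assumptions; adjust to your environment, or do the same in the vSphere Client):

```
# Create the three port-groups on the standard vSwitch (names are examples)
esxcli network vswitch standard portgroup add --portgroup-name=n1kv-control --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name=n1kv-mgmt --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name=n1kv-packet --vswitch-name=vSwitch0

# Tag each port-group with its VLAN (VLAN 10 is an assumed example)
esxcli network vswitch standard portgroup set --portgroup-name=n1kv-control --vlan-id=10
esxcli network vswitch standard portgroup set --portgroup-name=n1kv-mgmt --vlan-id=10
esxcli network vswitch standard portgroup set --portgroup-name=n1kv-packet --vlan-id=10
```

Then edit the VSM's VM settings and point its three network adapters at these port-groups.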
Once that is working, make sure you create control, management, and packet port-profiles for the VSM if you want to move it back. Make sure the system VLAN is set in those port-profiles and on the uplink port-profile as well. Once those are created, you can migrate the VSM to the VEM.
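For reference, a minimal sketch of the port-profiles on the VSM, assuming VLAN 10 for control/management/packet (the profile names and VLAN numbers are examples, not your actual config):

```
! Uplink port-profile - system vlan keeps this traffic forwarding
! even before the VEM gets its programming from the VSM
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  system vlan 10
  no shutdown
  state enabled

! vEthernet port-profile for the VSM's control interface
! (repeat similarly for management and packet)
port-profile type vethernet n1kv-control
  vmware port-group
  switchport mode access
  switchport access vlan 10
  system vlan 10
  no shutdown
  state enabled
```

The `system vlan` statement is the critical piece: it lets the VEM pass VSM traffic even when the VEM has lost contact with the VSM, which is exactly the chicken-and-egg problem you hit when the VSM runs on its own VEM.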
louis
Thank you for the suggestion, as it worked!
I use a single VLAN for Control, Packet, and Management, plus 2 VLANs for user traffic.
With only 2 GigE NICs, I am now able to use both as my uplinks. Is this a good configuration, given that I have only the one uplink trunk for all the traffic?