VMware Cloud Community
jpattimor
Contributor

Guests on different hosts can ping some other guests, but not all

I have deployed ESXi 6.5 on Cisco UCS hardware connected to Cisco 3850s, where the default VLAN 1 carries the gateway IP. I also have uplinks to separate switches for iSCSI. The hosts can communicate with each other, with the rest of the network, and with the iSCSI network without any issues. However, when I vMotioned guests to the new UCS hardware, I initially could not communicate with them at all. After I modified the vSwitch to tag VLAN 1, I could reach the guests from my workstation. But I then found that the guests could communicate with some, but not all, of the other guests on the various hosts. It made no difference whether the guests used the e1000, e1000e, or VMXNET3 NIC drivers. These are standard vSwitches. I'm completely stumped as to what is going on. Can anyone provide some guidance?

thanks

5 Replies
a_p_
Leadership

To me this looks like a configuration issue on the physical switches. VLAN tagging on port groups is only required if the vmnics are connected to trunk (802.1Q) ports and the VLAN is not the native/default VLAN. Please double-check the settings on the physical switches, and make sure that all vmnics which are connected to the same vSwitch are configured identically.
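For what it's worth, the vSwitch uplinks and port-group VLAN IDs can be compared quickly from the ESXi shell, which makes it easy to spot a mismatch between hosts (output names will of course depend on your setup):

```sh
# List standard vSwitches with their uplinks and teaming configuration
esxcli network vswitch standard list

# List port groups and the VLAN ID assigned to each
esxcli network vswitch standard portgroup list
```

Running both commands on two hosts and diffing the output is a quick way to confirm the vmnic and VLAN settings really are identical.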

André

jpattimor
Contributor

The physical switch ports are in an EtherChannel, LACP mode active, with allowed VLANs 1 and 922. VLAN 1 is the default. The Fabric Interconnects for the Cisco UCS hardware also have VLAN 1 as the default VLAN, assigned to all uplinks. With the disjointed network, that is part of the problem, because it adds VLAN 1 as allowed on the uplinks to the iSCSI switches as well. That is why I had to tag VLAN 1 on the vSwitch to get pinging to work at all.
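Roughly, the 3850 port configuration looks like this (interface and channel-group numbers here are placeholders, not my exact ports):

```text
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 1,922
!
interface TenGigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1,922
 channel-group 1 mode active
```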

a_p_
Leadership

To be sure that I fully understand the network configuration, please provide some more details.

You mentioned "EtherChannel mode active". Are you using distributed virtual switches on the ESXi hosts? Standard vSwitches do not support this configuration; LACP requires a distributed switch.
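As a sketch of the alternative, if you want to stay on standard vSwitches with a port channel, the physical side would need a static EtherChannel rather than LACP, paired with "Route based on IP hash" load balancing on the vSwitch (interface numbers below are placeholders):

```text
interface TenGigabitEthernet1/0/1
 switchport mode trunk
 switchport trunk allowed vlan 1,922
 channel-group 1 mode on   ! static EtherChannel - no LACP negotiation
```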


André

jpattimor
Contributor

The Cisco UCS Fabric Interconnects default to LACP for EtherChannels on the uplink ports, so the upstream switches are configured that way so that the hardware layer will talk properly. The vSwitch traffic has to go through the FIs before reaching the physical switch upstream. Right now I am using standard vSwitches, but if a distributed switch will solve my issue, I can configure one.

jpattimor
Contributor

Seems I may be the victim of a self-inflicted issue. I had two vmnics active for the VM Network port group on vSwitch0; however, the vmnics were pinned to different FIs on the Cisco side. Once I set the vmnics to active/standby, my issues went away.
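For anyone who hits the same thing, the failover order can also be checked and changed from the ESXi shell (the vSwitch and vmnic names below are from my setup; adjust to yours):

```sh
# Show the current teaming/failover policy for the vSwitch
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0

# Pin one uplink active and the other standby so traffic stays on a single FI
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic1
```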
