Good day guys,
I posted a few weeks ago about an external connectivity issue but did not get a response. I would really appreciate any help I can get now, as time is running out for my thesis completion. Here is a brief summary of my overall setup:
1) I am using Cisco UCS as my hardware environment. The UCS has 4 blade servers, each with 2 physical NICs; each NIC goes to a specific fabric interconnect switch, and both fabric interconnects terminate on a Catalyst 3560G switch that is BGP-enabled with a system MTU of 1600. The blade servers host:
- One server for the vCenter 6.7 appliance. One vDS is configured for NSX-T with a management port group, an overlay port group, and an edge-uplink port group that provides external network connectivity to my Catalyst 3560 switch over a dedicated VLAN. All the VLANs also have SVIs configured on the Catalyst.
- One server housing the NSX-T 2.3 manager, controller, and edge VM. The NSX-T manager and controller use only the management port group, which has a single uplink to vmnic0. The edge uses both vmnics: on the edge uplink VLAN port group, vmnic0 is active and vmnic1 is standby; on the overlay port group, vmnic1 is active and vmnic0 is standby. Promiscuous mode is enabled on the overlay port group.
- The last 2 blade servers are not connected to the vCenter vDS; their first NIC is connected to the ESXi standard vSwitch, and the remaining NIC is used by NSX-T for the underlay. Their NSX-T uplink profile is configured with a failover-order teaming policy.
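For context on why I set the Catalyst system MTU to 1600, this is the back-of-envelope Geneve overhead budget I worked from (a rough sketch: the exact Geneve option length varies, and the 64-byte option allowance is an assumption on my part, not something I measured):

```python
# Back-of-envelope MTU budget for Geneve-encapsulated overlay traffic.
inner_mtu = 1500       # MTU inside the guest VMs
outer_ipv4 = 20        # outer IPv4 header
outer_udp = 8          # outer UDP header
geneve_base = 8        # Geneve base header
geneve_opts = 64       # assumed headroom for Geneve options

required_underlay_mtu = inner_mtu + outer_ipv4 + outer_udp + geneve_base + geneve_opts
print(required_underlay_mtu)  # 1600
```

My worry is that this 1600-byte requirement has to hold end-to-end, including the UCS vNIC templates, not just on the Catalyst.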
----- The edge successfully formed a BGP neighborship with the Catalyst switch over the edge uplink VLAN. It was also added to the overlay using the overlay port group, connected successfully, and can ping the other ESXi host TEPs. ALL TEPS are reachable.
I noticed that although all TEPs are reachable (2 hypervisor TEPs and the edge TEP), VMs on the overlay cannot ping across hypervisors. VMs on the same hypervisor connected to the same logical switch can ping each other, and VMs on different logical switches on the same hypervisor can also ping each other.
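In case it matters: plain small pings between TEPs succeeding does not prove full-size encapsulated frames get through. This is the kind of check I mean (a sketch only; it assumes the TEP stack on NSX-T 2.3 ESXi hosts is still named `vxlan`, and `<remote-tep-ip>` is a placeholder for the other host's TEP address):

```shell
# On an ESXi transport node, ping the remote TEP with DF set and a payload
# sized so the full IP packet is 1600 bytes (1572 + 8 ICMP + 20 IP).
vmkping ++netstack=vxlan -d -s 1572 <remote-tep-ip>

# For comparison, a 1500-byte packet (1472 + 8 ICMP + 20 IP).
# If this succeeds while the 1600-byte ping fails, that would point at an
# MTU gap somewhere in the path (e.g. a UCS vNIC/QoS MTU left at 1500).
vmkping ++netstack=vxlan -d -s 1472 <remote-tep-ip>
```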
BUT when they are on different hypervisors, they cannot ping each other, even when connected to the same logical switch. VMs in the overlay also cannot ping beyond their gateways: they cannot reach the external network even though the service router of the Tier-0 logical router has the VM networks in its routing table. External networks can ping the gateways of the logical switches but cannot ping beyond that. I also noticed that when I attach a DHCP server to the VMs (I am using Ubuntu 16.04), they lease IP addresses for exactly 1 second before the lease ends. This is unusual, as the lease period is set to the default 8640000 s.
I have tried everything I know and have run out of ideas. PLEASE HELP, ANYONE. I will appreciate any form of help, as this is vital for completing my studies. Find attached my UCS vNIC template screenshots for possible diagnosis.
Screenshot (147).png 451.7 K