Hello,
I use NSX-T in my home lab and it works fine. Now we are trying it at work, but the problem is that none of the ESXi hosts or the edge node has a configured TEP. I can see that the hosts' pNICs are attached to the overlay N-VDS, but no IP addresses were assigned from the IP pools I created, and no TEPs are listed under any of the transport nodes (as shown below). The odd thing is that the status of all nodes (including the "Tunnel" status) is healthy and green, yet no actual TEP interfaces are there.
NSX-T 2.4.2
ESXi 6.5
vCenter server 6.7U1
Everything seems to be properly configured to me, so please advise.
Thank you,
One more thing: I can see that a few IP addresses are allocated from the IP pools, but from within one of the nodes (the edge) I can't ping the IP that is assigned to it; no actual TEP interface is there.
You're not going to see anything there if there aren't VMs that have interfaces on that N-VDS. If you want to verify your ESXi hosts have IPs assigned to their TEPs, you'll need to do it from a CLI session on the host and look at vmk10.
Pardon me, but this has nothing to do with my issue; it should be listing each TEP IP and the reachability to the other TEP addresses. I've added a VM to an overlay segment anyway, and it's unable to reach the gateway IP of the segment.
My mistake, I was interpreting that screenshot as from another place in the UI. Do you have a vmk10 kernel port assigned? Does it pull an IP from the TEP pool? Some more information from your side is needed.
No worries.
Yes, I have vmk10, but no gateway is assigned to it. I'm certain that a gateway for the TEP is defined, so is what's shown below correct and healthy?
You won't necessarily have a gateway, nor do you need one if all your TEPs are on the same L2. You may have an MTU issue on your segment. Also, vmk10 is provisioned on the vxlan netstack and not the default one; this is displayed in the output of esxcfg-vmknic -l.
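As a concrete way to check that (a sketch; these commands must be run from an SSH session on the ESXi host itself):

```shell
# List all VMkernel NICs; vmk10 should show an IP from the TEP pool
# and the vxlan netstack instance rather than defaultTcpipStack.
esxcfg-vmknic -l

# Confirm the vxlan netstack instance exists on the host.
esxcli network ip netstack list
```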
Some things that helped me: test Geneve tunnel pings from the vxlan stack.
[root@esx004:~] esxcli network ip interface ipv4 get
Name IPv4 Address IPv4 Netmask IPv4 Broadcast Address Type Gateway DHCP DNS
----- -------------- --------------- --------------- ------------ ------- --------
vmk0 192.168.91.202 255.255.255.224 192.168.91.223 STATIC 0.0.0.0 false
vmk10 192.168.65.70 255.255.255.192 192.168.65.127 STATIC 0.0.0.0 false
vmk50 169.254.1.1 255.255.0.0 169.254.255.255 STATIC 0.0.0.0 false
[root@esx004:~] ping 192.168.65.65
PING 192.168.65.65 (192.168.65.65): 56 data bytes
--- 192.168.65.65 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
[root@esx004:~] vmkping ++netstack=vxlan 192.168.65.65
PING 192.168.65.65 (192.168.65.65): 56 data bytes
64 bytes from 192.168.65.65: icmp_seq=0 ttl=64 time=0.735 ms
64 bytes from 192.168.65.65: icmp_seq=1 ttl=64 time=0.825 ms
--- 192.168.65.65 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.735/0.780/0.825 ms
Also, my Geneve tunnel pNICs are trunked, and I used a transport VLAN tag in the uplink profile.
What would you recommend I do? Open a support ticket, maybe?
I can't even perform a loopback ping on vmk10 and vmk11 on the same server.
Your vmkping command is not sufficient to prove there is good communication between the TEPs, because it doesn't account for the required MTU overhead. Use this instead:
vmkping -S vxlan <TEP> -d -s 1572 -c 10
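For reference, the 1572-byte payload in that command isn't arbitrary: Geneve transport needs an MTU of at least 1600, and vmkping's -s flag takes the ICMP payload size, so the IPv4 and ICMP headers have to be subtracted. A quick sketch of the arithmetic:

```shell
# Geneve transport requires MTU >= 1600. vmkping -s sets the ICMP
# payload, so subtract the IPv4 header (20 bytes) and the ICMP
# header (8 bytes) from the MTU to get the largest payload that
# must still fit without fragmentation (-d forbids fragmenting).
MTU=1600
PAYLOAD=$((MTU - 20 - 8))
echo "payload=$PAYLOAD"
```

If a ping at this size succeeds between TEPs, the underlay MTU is large enough for Geneve encapsulation.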
I think I have everything running fine; I could ping the TEP only by specifying the stack to ping on, using the command in the screenshot.
I'm also able to ping the TEPs of other hosts.
Though I can ping normally in my lab without needing this command, maybe the environment here is different from my home lab.
Thanks anyway,
Look at my above reply. You're not doing the vmkping correctly for a TEP.
Okay, I will apply it and let you know.
Here's the result. MTU is fine, I guess; it was 9000, and then I changed it to 1600.
That's a successful ping, so if you're still not seeing a tunnel come up in NSX-T Manager, see what the logs say. Otherwise I'd hit up GSS.
If your TEPs can ping each other at the required MTU and you attach VMs to overlay segments on each host, you should see the tunnels. Can you confirm you have at least one VM on an overlay segment on each host?