iforbes's Posts

@dtokarev Thanks!
Thanks! It was the fact I hadn't deployed an overlay TZ.
Technically, anything over 1500 MTU (regular Ethernet) is jumbo. Thanks again.
So, the Edge VTEP uplink needs to be configured with JFs? I'll give that a try.
Thanks for the quick reply. I figured it might be something like that. I have another question (just to ensure I've configured things correctly). My setup follows this topology:

Now, I've deployed my Edge appliances to hang off the VDS as shown above. In this specific architecture, the Edge VTEP needs to be in the same VLAN as the host transport node VTEPs (which I've done). While I can successfully ping between my host transport node VTEPs, I cannot ping from the Edge VTEP to a host transport node VTEP.

Looking at things more closely now, is this because I need to ensure the physical switch port (P0 in the pic above), the VDS, and the Edge Transport-PG (vNIC2 in the pic above, hosting the Edge VTEP) are all configured with jumbo frames (a minimum of 1600 MTU)? Presently, I don't have JFs configured on my VDS uplinks or port groups. Could that also be causing issues for N-S traffic (along with the need to ensure my logical networks are configured on my upstream router with a static route)?
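For reference, the reason 1600 is the usual minimum is that a full 1500-byte guest frame plus the overlay encapsulation header has to fit inside the physical MTU end to end. A quick way to confirm the whole path (physical switch port, VDS, and the Edge transport port group) really passes the larger frames is a don't-fragment ping sized for the bigger MTU from a machine sitting on the VTEP VLAN. A rough sketch, with placeholder VTEP addresses:

```python
import subprocess

# Placeholder addresses - replace with your Edge VTEP / host VTEP IPs.
TARGETS = ["192.168.50.11", "192.168.50.21"]

# 1600-byte MTU minus 20 bytes IPv4 header and 8 bytes ICMP header.
PAYLOAD = 1600 - 20 - 8  # 1572

for ip in TARGETS:
    # Linux ping: -M do sets the don't-fragment bit, -s sets the payload size.
    # If any switch port, port group, or uplink along the path is still at
    # 1500 MTU, this ping fails while a plain `ping <ip>` still succeeds.
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", ip],
        capture_output=True, text=True,
    )
    status = "OK" if result.returncode == 0 else "FAILED (MTU too small somewhere?)"
    print(f"{ip}: {PAYLOAD}-byte DF ping {status}")
```

On an ESXi host itself, the equivalent check from the TEP vmkernel interface should be something like vmkping ++netstack=vxlan -d -s 1572 <vtep-ip>.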
Since static routes are being discussed, I have a question on setup. I'm also running a lab environment. My physical-world gateway/firewall/internet router is 10.20.8.1 (NAT example). My NSX-T lab transport nodes all have 4 NICs: two are left on the vDS and two are dedicated to NSX-T. My ESXi servers and vCenter hang off the vDS and use 10.20.8.1 as their default gateway for internet connectivity.

I've successfully configured a Tier-1 gateway (for logical networks) and a Tier-0 gateway for N-S routing. I also don't want to configure BGP and would prefer to just configure static routes between my Tier-0 and the physical-world gateway/firewall/internet router. I've configured a static route on the Tier-0 for both 0.0.0.0/0 (internet) and 10.20.8.0/22 (physical network), with a next-hop address of 10.20.8.1 (is that correct?), using the uplink interface I configured (hanging off my VLAN-backed transport zone - IP: 10.20.8.253).

When I run a get-routes on the Tier-0 I can see all of the NSX-T logical networks I created (so I know the Tier-1 is successfully advertising its routes to the Tier-0). From the Tier-0 CLI I can ping 10.20.8.1 (although I'm not sure whether that's just because the Tier-0 management interface is on 10.20.8.0/22). When I jump on a VM attached to an NSX-T logical network, I'd think I could now ping physical-network IPs (i.e. IPs in 10.20.8.0/22), but I can't.

Is there anything missing from my static route setup? Do I also have to configure static routes on my physical-world gateway/firewall/internet router for the NSX-T logical networks? In the end I'd like my VMs on logical networks to be able to communicate with my physical network and also get internet access, all via static routes. Thanks
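For comparison, here's a minimal sketch of what the default route looks like when pushed through the NSX-T Policy REST API, assuming a Policy-managed Tier-0 with the placeholder id tier0-lab and a manager at the placeholder address nsxmgr.lab.local (paths and field names can vary between NSX-T versions, so treat this as illustrative rather than definitive):

```python
import requests

NSX_MGR = "https://nsxmgr.lab.local"   # placeholder manager FQDN
AUTH = ("admin", "password")           # placeholder credentials
TIER0_ID = "tier0-lab"                 # placeholder Tier-0 id

# Default route on the Tier-0 pointing at the physical gateway.
route = {
    "network": "0.0.0.0/0",
    "next_hops": [{"ip_address": "10.20.8.1", "admin_distance": 1}],
}

resp = requests.patch(
    f"{NSX_MGR}/policy/api/v1/infra/tier-0s/{TIER0_ID}/static-routes/default-to-physical",
    json=route,
    auth=AUTH,
    verify=False,  # lab only - self-signed manager certificate
)
resp.raise_for_status()
print("Static route applied:", resp.status_code)

# Note: a 10.20.8.0/22 static route on the Tier-0 is usually unnecessary, because
# the uplink interface (10.20.8.253/22) already makes that subnet directly
# connected. The piece that's commonly missing is the reverse path: the physical
# router needs a route for each NSX segment (e.g. the hypothetical 172.16.10.0/24)
# with a next hop of the Tier-0 uplink IP, 10.20.8.253.
```

The route id in the URL (default-to-physical here) is arbitrary; PATCH creates the route if it doesn't already exist. As the comment notes, the usual missing piece in a lab like this is the return route on the physical gateway, since it has no idea the logical segments live behind 10.20.8.253; the get-routes output suggests the Tier-1 to Tier-0 advertisement side is already working.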
Hi Bayu. Yes, I have bridging deployed, and dual control vm's in active/passive. I'll open a case, but how did you resolve? Is there an easy way to destroy the passive DLR node?
So, it's still happening, but at least I've narrowed it down. It's 100% the DLR control VM that, for some reason, causes the interfaces I've dedicated to VXLAN/VTEP to go DOWN. In my setup I have a dedicated vDS with 2 physical uplinks dedicated to VTEP/VXLAN traffic. The 2 uplinks are in an active/standby NIC team (use explicit failover order).

Something happens when this DLR VM resides on an ESXi server: after a period of time, the ESXi server loses network redundancy because at least one of the 2 uplinks gets marked as DOWN. After some more time the other interface also gets marked DOWN, and then network connectivity is lost since both interfaces are down. Could it be that some sort of traffic coming from this VM is flooding the physical interface, causing the switch port to get marked as down? When I reboot the ESXi server, the interfaces come back. If I migrate the VM to another ESXi server, after a period of time the exact same thing happens. Is there a way I can figure out why this is happening?
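One way to at least timestamp the flaps and correlate them with wherever the DLR control VM happens to be running is to poll physical NIC link state from vCenter. A rough pyVmomi sketch, with placeholder vCenter details (it only reports loss of link; whether the switch err-disabled the port or the UCS side dropped it still has to come from the upstream switch and UCSM logs):

```python
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter details - lab only, certificate verification skipped.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    while True:
        for host in view.view:
            for pnic in host.config.network.pnic:
                # linkSpeed is None when the vmnic has no link.
                if pnic.linkSpeed is None:
                    print(time.strftime("%H:%M:%S"),
                          host.name, pnic.device, "LINK DOWN")
        time.sleep(30)
finally:
    Disconnect(si)
```

Matching those timestamps against the upstream switch logs (err-disable, storm control, MAC moves on the VTEP ports) is usually the quickest way to tell which side actually shut the link.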
So, it definitely had something to do with the NSX side. In testing multi-tenancy I had created an additional DLR and ESG. When I deleted those from the environment, everything became stable again. No idea why, and it's a bit concerning that additional instances of those would cause issues, but things are back to being stable.
Hi. My VXLAN interfaces use the same physical uplinks as the VTEP interfaces. They are 2 dedicated physical uplinks in an active/standby NIC team. Yes, I do have OSPF configured between my DLR and ESG, and from the ESG to the physical core. I don't have OSPF configured on the core yet. Yes, vPC is configured between the FI and the core.
Running 6.3.0.5007049. It's deployed in a lab, so it's not affecting production. It's a big enough issue to present a roadblock to production deployment, though.
Hi. This is an odd issue. I've been noticing lately that the ESXi server that hosts the DLR and/or ESG control VMs will intermittently have only its VXLAN interfaces disconnected. No other interfaces on the ESXi server are affected. If the ESXi server doesn't house those control VMs, there are no issues. I can't figure out what is causing this unusual behaviour, but it's not good, as it causes a bunch of problems.

Since it's not an ESXi failure (just specific network interfaces going down), HA doesn't kick in to migrate those VMs to another host. So, I end up having VMs on the affected host just sit there until I'm alerted (i.e. network interface redundancy lost) and then I vMotion VMs away from the host. A reboot of the affected ESXi host resolves the problem and the interfaces are magically back up. My servers are Cisco UCS blades, and all interfaces are created as vNICs in UCSM and presented to ESXi as vmnics. As mentioned, no other vmnics on the ESXi host are affected.
Hi. I was wondering how to go about ensuring my VXLAN VMs are able to be part of my non-VXLAN AD domain. You mentioned you have a physical DC and a VM DC. Is the VM DC installed on the VXLAN network, with the L2 bridge facilitating the L2 connection between the two DCs? I guess my question is: how are people handling infrastructure services like AD/DNS, which have traditionally lived on VLAN-backed networks, so that they can now communicate with VXLAN networks? L2 bridging, or just ESG L3 routing (OSPF, BGP)?
Ok, I have more information. The NSX documentation specifically states the following about the SSL VPN-Plus private network config: "Type the port numbers that you want to open for the remote user to access the corporate internal servers/machines like 3389 for RDP, 20/21 for FTP, and 80 for http. If you want to give unrestricted access to the user, you can leave the Ports field blank."

So, I left the ports section blank as I wanted to allow unrestricted access. As soon as I entered a port (RDP, 3389) and tried to connect to a VM over RDP, it worked. I still cannot ping it or SSH to it (or do anything else other than RDP). It seems that unless I specify the ports I want open, it won't work.

For a little more investigation I went to the Flow Monitoring section and selected Live Flow to capture what was happening to the VM I was trying to connect to. When I RDP'd from my laptop (connected via SSL VPN) and successfully connected, the flow showed a source IP representing the Edge gateway and a destination of the target VM's IP. This is as expected, since the VPN tunnel terminates on the Edge gateway. I then initiated a ping from my laptop (connected via SSL VPN). The live flow showed an ICMP packet, but the source IP is the SSL VPN client virtual IP of my laptop (not the Edge gateway), the source port is 0, the destination IP is the VM, the destination port is 0, and the state is blank (see attached pic).

So, I'm not sure why RDP, which I defined as an acceptable port in the SSL VPN-Plus private networks section, goes through successfully and looks like it's sourced from the Edge gateway, while a ping looks like it's coming from my laptop's SSL VPN IP with no source or destination port. Why am I seeing different results for different traffic?
It's a pretty basic setup. I've attached a quick logical diagram. Yes, when I checked the client routing table, the private VXLAN networks are advertised. I'm able to ping the VPN default gateway (as defined in the SSL VPN-Plus IP pool settings).
I've configured the SSL VPN. I'm able to connect externally, install the SSL client, authenticate, and connect. The problem is that although I've defined the VXLAN networks in the SSL VPN-Plus private networks section, I'm not able to connect in any way to any VM in those networks. I've made sure all firewalls are disabled and still no go. I'm able to successfully ping/tracert to the SSL VPN default gateway, but nothing is reachable past the default gateway. Not sure what I'm missing.
The Edge Gateway can be deployed in an HA configuration, so there's no downtime.
Would a best practice (to avoid all this in/out confusion) be to use explicit Source/Destination objects rather than *any? If you use an explicit Source/Destination, you could leave the Direction as in/out.
One thing I came across: if you decide to use a single vDS across compute and management clusters, you had better ensure that all of the physical uplinks in that vDS carry the VXLAN (VTEP) VLAN. You cannot specifically override which physical uplinks get VXLAN; VXLAN will arbitrarily use any physical uplink on the vDS. So, if for example you have dedicated uplinks for hypervisor management, iSCSI connectivity, and vMotion, you need to ensure the VXLAN VLAN exists on those uplinks. That's easy enough with VLAN trunking (802.1Q), but if you want physical separation, then use a second vDS with 2 (or more) uplinks dedicated to VXLAN (VTEP).
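A quick way to double-check which physical uplinks each host actually contributes to a given vDS (and therefore which uplinks VXLAN traffic could end up on) is to walk the host proxy switches with pyVmomi. A rough sketch with placeholder vCenter details; it only lists the vmnics per vDS per host, so confirming the VTEP VLAN is trunked on those ports still has to be done on the physical switch:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter details - lab only, certificate verification skipped.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        # Each proxy switch is the host's local view of one vDS, including
        # which physical vmnics on this host back that vDS.
        for proxy in host.config.network.proxySwitch:
            vmnics = [p.pnicDevice for p in (proxy.spec.backing.pnicSpec or [])]
            print(f"{host.name}: {proxy.dvsName} -> {', '.join(vmnics) or 'no uplinks'}")
finally:
    Disconnect(si)
```

If the vDS that carries VXLAN shows uplinks you intended for management, iSCSI, or vMotion, that's the sign you either need the VTEP VLAN trunked to those ports or a separate vDS as described above.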
Hi Bayu, you solved it! I have 2 clusters (mgmt, compute). I had created a single vDS that included all hosts from both clusters. I was running into issues with distributed logical routing between hosts because VXLAN networks were being placed on physical uplinks that carried non-VXLAN VLANs. So, I created 2 vDS as you suggested. The second vDS just has 2 uplinks and is strictly dedicated to VXLAN (i.e. backed by the physical VLAN for VXLAN). When I prepared each cluster for VXLAN, I pointed it at this second vDS. Now all VXLAN VMs connect to the correct vDS and uplinks, and the vdr-vdrPort also connects to the same vDS and uplinks. I tested and now I can successfully ping from one VXLAN subnet to a different VXLAN subnet across ESXi hosts. Thanks for your help. Much appreciated!