JaSo2's Posts

Did I get it right that you are directly connecting the devices to physical ports on the server? Like there is no switch between your devices? (Not sure if the virtual switch lets it through - I'm trying to recall whether I have seen something like that, but no luck.) If the answer is yes, then the question would be why you need VLANs when you have the devices directly connected? The devices connected directly would have to understand 802.1Q (VLANs) too.
Maybe a dumb question (edit - yes, dumb question, sorry, I missed your previous answer regarding the VyOS router), but do you have route redistribution enabled for BGP, and do you have anything to advertise? :) If there is a T1 with connected segments, it should have route advertisement enabled too. It also seems that you are running the T0 in Active-Standby mode, where the 10.18.163.25 Edge is the standby one (guessing by the AS path prepend) - IIRC this Edge won't advertise routes, so you should check on the active one. The standby just keeps an Established session to minimize downtime in case the active Edge (or better, the Edge with the active T0 logical router) fails in any way. You can also check the log file on the Edge by running get log-file routing. J.
N-VDS is still supported on 3.2.x (and go for 3.2.3 rather than 3.2.1). It is not supported for 4.x and you won't be able to upgrade to 4.x if you are using N-VDS.
This is the UUID you are looking for - click the three dots next to the host transport node. Or simply run the API call /policy/api/v1/transport-nodes/ and that way you get a list of all nodes. For me, what worked was DELETE /api/v1/transport-nodes/<transport-node-id> plus identifying the hosts in Orphaned state under Standalone hosts, then force-deleting them (it takes some time). J.
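The flow above can be sketched in Python using only the standard library - a minimal sketch, assuming the manager address and credentials as placeholders; verify the force-delete query parameter against the API reference for your NSX-T version before using it:

```python
# Sketch of the cleanup flow: list transport nodes to find the UUID,
# then force-delete the orphaned one via the Manager API.
# MANAGER, user, and password are placeholders for your environment.
import base64
import urllib.request

MANAGER = "https://nsx-manager.example.com"  # placeholder manager address


def auth_header(user: str, password: str) -> str:
    """Basic-auth header value for the NSX Manager API."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"


def list_request(user: str, password: str) -> urllib.request.Request:
    """GET /api/v1/transport-nodes - lists all transport nodes with their UUIDs."""
    req = urllib.request.Request(f"{MANAGER}/api/v1/transport-nodes")
    req.add_header("Authorization", auth_header(user, password))
    return req


def delete_url(node_id: str, force: bool = True) -> str:
    """DELETE /api/v1/transport-nodes/<id>; force=true skips the normal uninstall."""
    suffix = "?force=true" if force else ""
    return f"{MANAGER}/api/v1/transport-nodes/{node_id}{suffix}"
```

Sending delete_url(...) as a DELETE request (e.g. urllib.request.Request(url, method="DELETE") with the same auth header) mirrors what the Force Delete in the GUI does.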
Hello, great :). Regarding renaming of the profiles - yes, there should be no functional problem with renaming them. You just have to try whether NSX lets you do it while the profile is already applied somewhere (but with profiles I think it should be possible). J.
Hello, yes, it looks pretty straightforward, but it can be tricky:
- If you have DRS fully automated, then nothing else is needed and everything will be done automagically from the maintenance mode (MM) perspective.
- You can rename the VDS afterwards - after all, it is "just" an object. Mind the fact that there will also be an uplink portgroup with a similar automatically generated name.
The tricky part not mentioned in the docs (at least not the last time I checked), which bit me and turned out not to be a bug (support confirmed), is the default maintenance-mode timeout of the vds-migrate command. It is 300 seconds - meaning if your running VMs are not migrated off the host within 300 seconds, you are running into serious problems. There is an option for the migration command (the maintenance-timeout switch) to modify this default behavior - I recommend checking it in the CLI. If I were you, I would go the API path - it gives you much more control, and you can set the VDS name in the API call along the way. J.
From your description I guess you have your T0 in Active-Active mode, while the T1 is Active-Standby (by design)? Which NSX version are you on?
I can confirm that the 3-tier app posted by Shahab works, albeit it can be a little more difficult to deploy (time-wise). I have used it for something similar (customer presentation / acceptance tests), but I would choose a simpler approach next time. VMware also has a 2-tier app in its labs, which may be sufficient, but I'm not sure if it has been released to the public / described anywhere...
Looking at the product download page, I do not see kernel VIBs for ESXi 8, so I would guess it is not supported (yet?). Also, the Interop matrix is not so happy about this combination.
Yes, that is possible, and it seems like the way to go (it is in the docs - the link that I have posted): "If you deploy a single NSX Manager or Global Manager in a production environment, you can also enable vSphere HA on the host cluster for the manager nodes to get automatic recovery of the manager VM in case of ESXi failure. For more information on vSphere HA, see the vSphere Availability guide in the vSphere Documentation Center."
It is possible with the command: set hostname hostname.example.com. I have just tried it on 3.2.2 and it worked (I rebooted the appliance in question, but I don't know if that is necessary) - also, after the appliance was back up, it took a while before the change was reflected in the cluster status information. The other way would be to delete the appliance and recreate it with the FQDN, but I guess this is faster :).
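If you want to watch for the moment the cluster reflects the change instead of refreshing the GUI, the cluster status API can be polled - a stdlib-only sketch, assuming the GET /api/v1/cluster/status endpoint and a placeholder manager address; the exact JSON field names may vary between releases, so treat them as assumptions to verify:

```python
# Sketch: build an authenticated request for GET /api/v1/cluster/status and
# check the returned JSON for a STABLE management cluster.
# Manager address, credentials, and field names are assumptions to verify.
import base64
import urllib.request

MANAGER = "https://nsx-manager.example.com"  # placeholder


def status_request(user: str, password: str) -> urllib.request.Request:
    """Authenticated request for the cluster status endpoint."""
    req = urllib.request.Request(f"{MANAGER}/api/v1/cluster/status")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req


def is_stable(status: dict) -> bool:
    """True once the management cluster reports STABLE in the status JSON."""
    return status.get("mgmt_cluster_status", {}).get("status") == "STABLE"
```

Calling is_stable(...) on the parsed response in a loop (with a sleep between polls) gives a simple "wait until the renamed node has rejoined" check.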
Yes, it does (I would go with the Medium size). More is written here: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/installation/GUID-10CF4689-F6CD-4007-A33E-A9BCA873DA8A.html - for a production deployment it is recommended to have 3 Manager nodes, but it will also run with 1. Not sure what support will tell you, though, if you mark your environment as production and have only 1 NSX Manager appliance there :). Mind the fact that if you do not have connectivity from the TNs to the Manager node, your data plane keeps working, but management and control do not - you won't be able to make any configuration changes, vMotion VMs between hosts, connect them to different logical segments, etc.
Something similar happened to me in my lab - I found the host among Standalone hosts in the NSX Manager GUI and was able to Force delete it that way. After that it was possible to add the old-new host under the same name / IP etc.
Basically there are 3 ways to do it:
- GUI
- CLI (from the Manager node)
- API
GUI - I crossed this option out even in the testing phase (lab) - it did not work properly for me.
CLI can be OK, but there is a big BUT. The migration process is more or less about preparing a VDS according to the N-VDS configuration, preparing an uplink profile, putting the host into MM, migrating the host to the VDS, and taking it out of MM. The problem is in putting the host into MM - there is a default timer of 300 seconds, and if the host is not in MM by that time, the process moves on to the next host and does not care about the state of the cluster from the resource perspective. It is possible to prolong this timer with the maintenance-timeout option, like "vds-migrate esxi-cluster-name cluster maintenance-timeout 60", but I simply do not like how it works.
I really recommend the API way - it is semi-manual (you have to put the host into MM and take it out yourself), but it is much more under your control, and it has some other possibilities, like renaming the VDS at the time of its creation to a name matching your naming convention.
Other than that, what I have noticed: the N-VDS should have the exact same configuration on each host (e.g. LLDP disabled / enabled everywhere), otherwise multiple VDSes will be created (but that is noted in the documentation - https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/administration/GUID-1039A36F-F55E-4A0A-B6C6-2C383F4A716D.html). I also had trouble with IP collisions for TEPs - some hosts were assigned IPs that were already in use (in that case, do not take the host out of MM - if you have DRS, VMs will be automatically moved there but won't communicate due to the collision; tunnels will be down for that host). A simple change of the IP Pool to DHCP, saving, and assigning the IP Pool back fixed the issue.
Nope - N-VDS is still supported in version 3.2.2; it is not supported starting with 4.0.0.1.
In nested environments, starting from this week, I'll always be using a logical segment for the Edge TEP. First of all, there is a great KB which should be mentioned: https://kb.vmware.com/s/article/83743 From my customer-deployment experience, if I wanted to be 100% sure that TEPs will be connected, I would use 2 VLANs (separate subnets routed over the underlay) - I had 0 problems with that. In a nested environment this configuration did not work, and I faced very strange problems at L7 - some HTTP worked perfectly, some did not. Packet tracer showed that for the HTTP that did not work, the ACK got lost during the handshake (but just for SOME websites...). I had the Edge connected to a PG on a VDS that was not controlled by NSX; the hosts (TEPs) were on a VDS controlled by NSX. After I put the Edge + host TEPs into one VLAN and used a logical segment (supported since 3.1) for the Edge TEP instead of the portgroup, everything started working... I guess there is some problem with GENEVE encapsulation / de-encapsulation in the nested environment (since there is the vSwitch of a physical host underneath it and not a physical NIC).
I have done this upgrade just recently (on a similar configuration, which was even in the older Manager API / mode) and it went fine. (Migrating N-VDS to VDS on that particular infra was a completely different story though :), but that's not your case.) Of course, follow the instructions from Prashant (Interop and RN check, and follow the Upgrade guide) - I have done every 3.2 upgrade with the help of the NSX Upgrade Evaluation Tool, to be sure the DB is ready, but you can check that in the Upgrade guide / docs.
Hello, a question regarding support for Ubuntu 20.04 - when is it planned to add support for this version of the OS? Thank you, J.
Situation - Management cluster deployed in 2+1 mode across 2 sites in different subnets. It is possible that ESXi TNs are served by Management cluster nodes in a remote DC. Failure - a failure of the interconnect between the sites (creating a split brain). Will it work in such a way that the transport nodes will connect to a reachable Management cluster node locally, the single node won't be accessible from UI / API, and the configuration will automatically sync between the nodes when connectivity between the sites recovers?
@Biowarez well, if you are not willing to troubleshoot the issue as described in the second post (my response), then you can try to install 3.1 - it could also be a faster option in your case, since I guess you do not have that much configured inside NSX-T yet anyway. J.