The last step in the documentation for setting up SNMP says that you need to open the firewall to allow this: "8. Configure Firewall Settings. After you have configured SNMP settings, go to Firewall settings (Configure > Profiles > Firewall) to configure the firewall settings that will enable your SNMP settings." Have you done this? By default all incoming traffic is blocked by the edge firewall.
If you want VM mobility and redundancy using public clouds, I would say NSX Cloud is not the best choice, as it is aimed at native clouds; you can't simply take your workloads there. From what you said it seems that a VMware Cloud on a hyperscaler, like VMC on AWS, is the best fit. These clouds include HCX, which gives you the mobility you need, and you can leverage replication with it, or SRM, or services like DRaaS, depending on what you need.
Changing ports is required when using NSX-V, as both OpenShift SDN and NSX-V use VXLAN on port 4789. NSX-T uses Geneve on port 6081, so I don't see any need to change anything. Red Hat's documentation says they use VXLAN with port 4789: Installing a cluster on vSphere with user-provisioned infrastructure and network customizations - Installing on vSphere | Installing | OpenShift Container Platform 4.6. It seems they kept old text from the NSX-V days in the newer docs that mention NSX-T.
What are the IPs of the VMs and their default gateway? Looking carefully at everything you sent, it seems that the VMs are using .1 or .2 in the last octet and they are configured with .254 as their default gateway. If this is correct, you have a problem with your Tier-1s, as they are also configured with .1 instead of .254. The drawing you sent in the initial post also shows that the default gateway is .254. If my reading is correct, you should change the gateway IP under each segment to .254, as this is the IP of the Tier-1 attached to that segment.
It seems you did not finish the last part of the configuration, called "Teaming Policy Switch Mapping". In this part you have to map the NSX uplinks you defined in the uplink profile (uplink-1 and uplink-2 in your picture) to the uplinks of your vDS. At the bottom of the screen you attached there is a column called "Uplinks" (these are the NSX uplinks), and you need to fill in the other column, called "VDS Uplinks". Click on the blank spaces in this second column and choose the appropriate VDS uplink to finish the configuration. Have you done this already?
This issue is documented here: VMware NSX for vSphere 6.4.1 Release Notes Issue 2130563: Warning message appears when assigning NSX Data Center license: "The selected license does not support some of the features that are currently available to the licensed assets" If you have an NSX for vSphere license assigned, and then assign an NSX Data Center license, you see the following warning message: "The selected license does not support some of the features that are currently available to the licensed assets". This is because the two licenses define the NSX features differently. If you are assigning a license edition that licenses the same features as your current license, it is safe to ignore this message. See VMware knowledge base article 2145269 for more information about NSX licenses. Workaround: Verify the new license supports the feature you need, and ignore the warning message.
The answer to your question depends on the ESXi version. If you use vSphere 7 + VDS 7 you will not need additional NICs or an N-VDS, as NSX-T can leverage the vDS to create NSX segments. If you use an N-VDS then it needs its own NICs, either additional ones or NICs migrated from the vDS. Either way you need an NSX host switch (N-VDS or VDS 7+), configured through host preparation using only a VLAN transport zone. With this setup you can leverage NSX-T security features by simply creating segments that map to the same VLANs as the vDS port groups and migrating VMs to these segments. In NSX-T 3.0 a wizard was created exactly for this use case. Check this blog post that shows it: https://vdives.com/2020/05/20/nsx-t-3-0-lab-micro-seg-only-deployment-wizard/
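As a rough illustration of the "segment per existing VLAN" idea, a VLAN-backed segment mirroring a port group on VLAN 20 could be described with a Policy API payload like the one below. The names, VLAN ID, and transport zone path are hypothetical; check the NSX-T Policy API reference for the exact schema on your version.

```python
import json

# Hypothetical example: a VLAN-backed segment matching an existing port
# group on VLAN 20. This dict is the kind of body you would PATCH to
#   /policy/api/v1/infra/segments/app-vlan20
# (segment ID and transport zone ID are placeholders).
segment = {
    "display_name": "app-vlan20",
    "vlan_ids": ["20"],
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-tz-id>",
}

print(json.dumps(segment, indent=2))
```

VMs moved from the VLAN 20 port group to this segment keep their connectivity but become subject to the DFW.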
Since NSX-T 3.0 vShield is the default license: The default license upon install is "NSX for vShield Endpoint", which enables use of NSX for deploying and managing vShield Endpoint for anti-virus offload capability only. This is from the release notes: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/rn/VMware-NSX-T-Data-Center-30-Release-Notes.html
Yes, you can do that, with these steps: 1) create a prefix-list that matches the segment you want; 2) create a route-map that matches on this prefix-list and sets the community you want; 3) add this route-map as a route filter in the outbound direction on your T0 BGP neighbor. This will insert the community in routes sent to the BGP neighbor in the physical network. Just be careful with the route-map logic so that you don't filter out other routes.
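The route-map logic above can be sketched as a small simulation; the prefix and community values here are hypothetical, and this is just the matching behavior, not an NSX API call. The important point is the fall-through: routes that do not match the prefix-list are still permitted, only without the community.

```python
import ipaddress

# Hypothetical segment prefix and BGP community, for illustration only.
SEGMENT_PREFIX = ipaddress.ip_network("10.10.10.0/24")
COMMUNITY = "65000:100"

def apply_route_map(route):
    """Mimic the route-map: set the community on routes matching the
    prefix-list, and permit everything else unchanged (so other
    advertised routes are not filtered out)."""
    net = ipaddress.ip_network(route)
    if net == SEGMENT_PREFIX:           # prefix-list match
        return (route, [COMMUNITY])     # set community
    return (route, [])                  # permit, no community added

for r in ["10.10.10.0/24", "10.10.20.0/24"]:
    print(apply_route_map(r))
```

If the route-map instead ended with an implicit deny, the second route would be dropped from the advertisement, which is the mistake the answer warns about.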
I didn't quite get your specific doubt, but let's say you have an overlay segment on network 192.168.1.0/24 connected to T1-A, with IP 192.168.1.1 as default gateway. Now you have a physical server on VLAN 10, on subnet 192.168.10.0/24, whose default gateway is 192.168.10.1. You can configure T1-A to be the default gateway of VLAN 10 with the following steps: 1) create a new segment on a VLAN transport zone that is available on the edge nodes and configure it with VLAN 10; 2) edit your T1-A gateway and add a Service Interface; 3) configure this interface with the IP address 192.168.10.1/24 and connect it to the segment created in step 1. With this you have a T1 that on one interface is the default gateway of the overlay segment and on the other is the default gateway of VLAN 10. Now all you have to do is configure gateway firewall rules so that VLAN 10 can only access what you want.
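The steps above would correspond roughly to Policy API payloads like the following. All IDs, display names, and resource paths are hypothetical, and the field names should be verified against the NSX-T Policy API reference for your version before use.

```python
# Hypothetical sketch of the two objects involved; adjust IDs and paths
# to your environment and verify the schema in the Policy API reference.

# Step 1: a VLAN 10 segment on a VLAN transport zone reachable by the
# edge nodes, e.g. PATCH /policy/api/v1/infra/segments/vlan10-segment
vlan_segment = {
    "display_name": "vlan10-segment",
    "vlan_ids": ["10"],
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<edge-vlan-tz-id>",
}

# Steps 2-3: a service interface on T1-A acting as the VLAN 10 gateway,
# e.g. PATCH /policy/api/v1/infra/tier-1s/T1-A/locale-services/<ls-id>/interfaces/vlan10-if
service_interface = {
    "display_name": "vlan10-if",
    "segment_path": "/infra/segments/vlan10-segment",
    "subnets": [{"ip_addresses": ["192.168.10.1"], "prefix_len": 24}],
}

print(vlan_segment)
print(service_interface)
```

The service interface's `segment_path` points back at the segment from step 1, which is what ties the T1 to VLAN 10.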
To connect workloads to VLAN-backed segments in NSX-T, they have to be on an NSX-T managed switch, which means either an N-VDS or a VDS 7+. Since workloads run on the host, these segments go on the VLAN transport zone assigned to the host transport nodes. A good practice is to use a different VLAN transport zone for the edges, so when you create the segments for the T0 uplinks they are only available inside the edge and you don't see them in vCenter.
If you are using vDS 7.0 you do not configure the MTU in NSX-T, as this is already defined in vCenter. That field in the uplink profile only applies to N-VDS. Just leave it blank.
NSX-T does not lock out accounts that are not local. If you are using LDAP directly with NSX-T 3.0 (no vIDM), be sure to use the username@domain format to log in. If you do not append the @domain, NSX-T will attempt to authenticate locally, and since the user does not exist it will give a message saying the user/password combination is incorrect or the account has been locked.
There seems to be a VLAN tagging issue. Make sure the TEP VLAN tag is configured on all ESXi host uplinks and on the connection between the ToR switches.
There is a new view for tags under Inventory in NSX-T 3.0 that is an improvement. Are your needs regarding tagging related to the UI or the API? In the Policy API you can see all VMs with a certain tag using a GET https://nsxapp-01a.corp.local/policy/api/v1/infra/tags/effective-resources?tag=test123 I tried this and it returns all VMs that have the tag test123. You can also list all tags with a GET https://nsxapp-01a.corp.local/policy/api/v1/infra/tags/ If you give some more details on what you are trying to achieve, we may be able to help.
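To pull just the VM names out of the effective-resources response above, something like the sketch below works. The response shape used here (a `results` list with `target_type` and `target_display_name` fields) is an assumption; verify it against the API output of your NSX-T version.

```python
# Hypothetical response shape for
#   GET /policy/api/v1/infra/tags/effective-resources?tag=test123
sample = {
    "results": [
        {"target_type": "VirtualMachine", "target_display_name": "web-01"},
        {"target_type": "VirtualMachine", "target_display_name": "web-02"},
    ]
}

def vm_names(response):
    """Extract VM display names from an effective-resources response."""
    return [r["target_display_name"]
            for r in response.get("results", [])
            if r.get("target_type") == "VirtualMachine"]

print(vm_names(sample))  # names from the hypothetical sample above
```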
The main benefit of Federation is that you have independent managers in each site. Take a look at the Multisite doc here: NSX-T Multisite. You will not have to recover managers on site failure. Federation does come with some limitations, and not everything that works with Multisite is available with Federation. There is also a licensing impact, as Federation requires Enterprise Plus while Multisite is available with Advanced. At least as of today, I would recommend Multisite if it satisfies all your requirements and the recovery options it provides are enough for your needs.
You can use 2 gateways, but they will be treated as ECMP. There is no control over which next-hop is used based on the source. They will both be used and traffic will be distributed on both.
If you are starting to use NSX, go for NSX-T, as NSX-V has an announced end of support. If all you want is to isolate VMs, the easiest way is to use the distributed firewall. It has no dependencies on overlay routing. The DFW uses groups for rules, which can have specific membership criteria, so you can essentially isolate VMs without even having to call an API. If you want to check something outside of the NSX environment and act upon it, I think the easiest way to isolate a VM would be to have a DFW rule with the desired isolation that matches VMs with a specific tag. When you effectively want to isolate the VM, just send an API call to tag the VM and the DFW rule will start acting. Remove the tag and you remove the isolation.
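A sketch of that tagging call is below. The manager hostname, VM external ID, tag value, and exact request body are all hypothetical; verify the VM tagging endpoint and its payload schema in the NSX-T API reference for your version before relying on it. The block only builds the request, it does not send it.

```python
import json

# Hypothetical manager and VM external ID. The idea: a DFW rule matches a
# group whose criterion is tag "quarantine"; applying the tag via the API
# isolates the VM, removing it lifts the isolation.
NSX = "https://nsx-mgr.corp.local"
vm_external_id = "564d1111-2222-3333-4444-555566667777"

def quarantine_payload(external_id):
    """Body to apply the (hypothetical) quarantine tag to one VM."""
    return {
        "external_id": external_id,
        "tags": [{"scope": "", "tag": "quarantine"}],
    }

# Hypothetical endpoint for updating VM tags; check your API reference.
url = NSX + "/api/v1/fabric/virtual-machines?action=update_tags"
print(url)
print(json.dumps(quarantine_payload(vm_external_id)))
```

Sending the same call with an empty `tags` list (or the tag removed) would take the VM out of the group and out of the isolation rule.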
Edge nodes are a pool of resources for T0 gateways, and for T1 gateways if you want stateful services like NAT, load balancing, or gateway firewall on them. You size them according to your environment, and if they host a T0 gateway you can connect them to whichever physical routers you want; design your network to fit your needs. Your edge clusters can be separate for different services if you want, this is not a problem. Each edge node can only host one T0.