p0wertje's Accepted Solutions

You should be able to add, under 'workers:':

    volumes:
      - name: containerd
        mountPath: /var/lib/containerd
        capacity:
          storage: 64Gi

As described in https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-4E68C7F2-C948-489A-A909-C7A1F3DC545F.html
What kind of redistribution are you looking for?
Are you using that teaming-1 and teaming-2? It looks like something goes wrong there. Did you assign that teaming to the VLANs you use for the connection to the leaf switches?
Hi, I see no issue here. In this example you see two VMs, and they can ping each other. Can you ping the IPv6 link-local address (fe80::xxxxx)? Have you configured it by hand? I do get "Network unreachable" if you try to ping an address that is not in the same IPv6 subnet (I do not have a default IPv6 gateway).
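To see why the "Network unreachable" shows up, you can check whether two addresses share a subnet with Python's ipaddress module. This is just an illustration with made-up documentation addresses (2001:db8::/64), not your actual addressing:

```python
import ipaddress

# Hypothetical VM addresses, both inside the same /64 (assumption: 2001:db8::/64)
net = ipaddress.IPv6Network("2001:db8::/64")
vm1 = ipaddress.IPv6Address("2001:db8::11")
vm2 = ipaddress.IPv6Address("2001:db8::12")
other = ipaddress.IPv6Address("2001:db8:1::5")  # lives in a different /64

print(vm1 in net, vm2 in net)  # True True -> reachable without a gateway
print(other in net)            # False -> needs a default IPv6 gateway/route

# fe80::/10 addresses are link-local and always reachable on the same segment
print(ipaddress.IPv6Address("fe80::1").is_link_local)  # True
```

Without a default IPv6 gateway configured, anything outside the local /64 gives exactly the "Network unreachable" error described above.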
Hi, It is not possible to connect more than one logical router to the same segment. The only thing you could do is connect it over the service interface. But for the service interface you need an SR. From the design guide: "Service Interface: Interface connecting VLAN segments to provide connectivity and services to VLAN backed physical or virtual workloads. Service interface can also be connected to overlay segments for Tier-1 standalone load balancer use-cases explained in Load balancer Chapter 6. This interface was referred to as centralized service port (CSP) in previous releases. Note that a gateway must have a SR component to realize service interface. NSX-T 3.0 supports static and dynamic routing over this interface."
Hi, I think you can just do it in the HTTP profile. When testing on port 80, I get a 301 redirect.
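What the redirect rule effectively does is answer a port-80 request with status 301 and an HTTPS Location header. A minimal sketch of that behavior, with a made-up host name, just to show the semantics:

```python
def https_redirect(host: str, path: str) -> tuple:
    """Mimic what an HTTP profile's port-80 redirect returns:
    a 301 status plus a Location header pointing at the HTTPS URL."""
    return 301, f"https://{host}{path}"

# Hypothetical virtual server name; a client request to port 80 would get:
code, location = https_redirect("app.example.com", "/login")
print(code, location)  # 301 https://app.example.com/login
```

The browser then retries the same path over HTTPS, which is why testing with plain HTTP on port 80 shows the 301.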
Hi, A good place to start is here:
VMware® NSX-T Reference Design - VMware Technology Network VMTN
NSX-T LB Encyclopedia - VMware Technology Network VMTN
And there is also an option for the NSX Advanced Load Balancer:
NSX Advanced Load Balancer (by Avi Networks) Encyc... - VMware Technology Network VMTN
Especially the NSX-T Reference Design guide gives a good overview of the different options.
According to the design guide, VMware® NSX-T Reference Design - VMware Technology Network VMTN, page 83: "Services like NAT are in constant state of sync between active and standby SRs on the Edge nodes."
"Active/Standby – This is a high availability mode where only one SR acts as an active forwarder. This mode is required when stateful services are enabled. Services like NAT are in constant state of sync between active and standby SRs on the Edge nodes. This mode is supported on both Tier-1 and Tier-0 SRs. Preemptive and Non-Preemptive modes are available for both Tier-0 and Tier-1 SRs. Default mode for gateways configured in active/standby high availability configuration is non-preemptive."
So if there is a failover, the standby takes over with all active connections. I have checked on the SR itself ('get firewall connection state') and all the same connections are also present on the standby SR. Does this answer your question?
Hi, I do not think you need to do NIC teaming in the OS. As long as you have proper NIC teaming on the physical NICs to the physical switch, you should be good to go. See this post with a similar question: https://communities.vmware.com/t5/ESXi-Discussions/nic-teaming-in-a-guest/td-p/2255896
They are working on it. It will be available soon.
Hi, No, not directly. Only routed over T1/T0, or using a bridge. See https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/nsxt_30_admin.pdf, page 69:
"Layer 2 Bridging: With layer 2 bridging, you can have a connection to a VLAN-backed port group or a device, such as a gateway, that resides outside of your NSX-T Data Center deployment. A layer 2 bridge is also useful in a migration scenario, in which you need to split a subnet across physical and virtual workloads. A layer 2 bridge requires an Edge cluster and an Edge Bridge profile. An Edge Bridge profile specifies which Edge cluster to use for bridging and which Edge transport node acts as the primary and backup bridge. When you configure a segment, you can specify an Edge bridge profile to enable layer 2 bridging."
Or use "Configuring Bare Metal Server to Use NSX-T Data Center". But you are limited to certain operating systems.
Bare Metal Server Requirements:
Operating System                   Version                                   CPU Cores   Memory
CentOS Linux                       7.7, 7.6 (kernel: 3.10.0-957)             4           16 GB
Red Hat Enterprise Linux (RHEL)    7.7, 7.6 (kernel: 3.10.0-957)             4           16 GB
Oracle Linux                       7.7, 7.6 (kernel: 3.10.0-957)             4           16 GB
SUSE Linux Enterprise Server       12 SP3, 12 SP4                            4           16 GB
Ubuntu                             16.04.2 LTS (kernel: 4.4.0-*), 18.04      4           16 GB
Windows Server                     2016                                      4           16 GB
Hi, Sounds like a firewall issue. Is the manager able to reach the ESXi hosts? See VMware Ports and Protocols for the port requirements.
Hi. The reply I got from VMware: "On an earlier version of NSX-T the concept and function of Domains was introduced and was present on the UI. However a product decision was made to explicitly remove this from the UI from 2.4.1 onwards. The plan being to perform an internal assessment of the role of Domains and to ensure they would be future proofed from a product roadmap perspective, Federation etc. A decision was made to leave the Domain API as fully functional."
So it is not available in the GUI, but it is available in the API.
Hi, No, it is not possible in this version; only one is allowed. For a more robust solution you should use Identity Manager. (I had the same question and asked VMware via support.)
According to the documentation you cannot do that. It makes sense, because edges are deployed and managed by the NSX Manager (via the vSphere Web Client). Only on the standalone edge can you use the password command on the CLI, because that one is not managed by NSX. The other way to do it is via the API. See https://docs.vmware.com/en/VMware-NSX-Data-Center-for-vSphere/6.4/nsx_64_api.pdf, page 364.
Hi, A route alone is normally not enough for NAT. Depending on your router brand, you most likely need to enable NAT somewhere. You only route 10.1.0.0/24 now, but you have subnets in 10.1.10.0/24. Better to use 10.1.0.0/16.
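You can verify the prefix-coverage point quickly with Python's ipaddress module: 10.1.10.0/24 is not contained in the 10.1.0.0/24 route, but it is covered by the wider 10.1.0.0/16:

```python
import ipaddress

route = ipaddress.ip_network("10.1.0.0/24")    # the route currently configured
wider = ipaddress.ip_network("10.1.0.0/16")    # the suggested summary route
subnet = ipaddress.ip_network("10.1.10.0/24")  # one of the workload subnets

print(subnet.subnet_of(route))  # False: the /24 route does not cover it
print(subnet.subnet_of(wider))  # True: the /16 covers all 10.1.x.x subnets
```

This is why traffic from 10.1.10.0/24 never matches the existing route: the /24 only covers 10.1.0.0-10.1.0.255, while the /16 covers every 10.1.x.x subnet in one entry.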
Hi, That is where the SR comes into play. The SR takes care of those services. You need Edge nodes to run an SR (they can be virtual). From the design guide:
"4.1 Tier-0 Gateway consists of two components: distributed routing component (DR) and centralized services routing component (SR).
4.1.2 Services Router: East-West routing is completely distributed in the hypervisor, with each hypervisor in the transport zone running a DR in its kernel. However, some services of NSX-T are not distributed, due to their locality or stateful nature, including:
● Physical infrastructure connectivity
● NAT
● DHCP server
● Load Balancer
● VPN
● Gateway Firewall
● Bridging
● Service Interface
● Metadata Proxy for OpenStack
A services router (SR) – also referred to as a services component – is instantiated when a service is enabled that cannot be distributed on a gateway. A centralized pool of capacity is required to run these services in a highly-available and scale-out fashion. The appliances where the centralized services or SR instances are hosted are called Edge nodes. An Edge node is the appliance that provides connectivity to the physical infrastructure."
Found the problem (thanks to VMware support!). I had tagged the <nodename> ncp/node_name and ncp_cluster on the VM name instead of on the interface/logical switch. Because of this the hyperbus was unhealthy. On the ESXi host, in 'nsxcli', you can type 'get hyperbus connection info'; this showed nothing. That was exactly the reason why I got a connection refused. After changing the tagging, the hyperbus was healthy and everything works.

xxxxx.infra.test> get hyperbus connection info
                VIFID                            Connection                         Status
198c008e-dc61-406e-bf75-688c4dae0a24         169.254.1.12:2345                     HEALTHY
4601b498-1ee7-4232-8ead-a70663a221e1         169.254.1.11:2345                     HEALTHY

The nsx-node-agent also reports healthy now:

kubectl exec -n nsx-system -it nsx-node-agent-hclhs nsxcli
Defaulting container name to nsx-node-agent.
Use 'kubectl describe pod/nsx-node-agent-hclhs -n nsx-system' to see all of the containers in this pod.
NSX CLI (Node Agent). Press ? for command list or enter: help
k8s-node01> get node-agent-hyperbus status
HyperBus status: Healthy
k8s-node01>
Hi, There is some info in the design guide about the MTEP: http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/nsx/vmw-nsx-network-virtualization-design-guid… page 29:
"Hybrid Mode: Hybrid Mode offers operational simplicity similar to Unicast Mode (no IP Multicast Routing configuration required in the physical network) while leveraging the Layer 2 Multicast capability of physical switches. This is illustrated in the example in Figure 29, where it can also be noticed how the specific VTEP responsible for performing local replication to the other VTEPs part of the same subnet is now named "MTEP". The reason is that in Hybrid Mode the [...]"
Hi, It seems the version is 6.2.48886 after the upgrade (I got the same). The 4.0.6 you see is probably the Nicira NVP version number. As you may know, Nicira was acquired by VMware a couple of years ago. -- Chris