p0wertje's Posts

Hi, I think this is what you mean: Default Gateway (IP Routing on Avi SE) (avinetworks.com). Please keep in mind that the Basic edition has a lot of limitations, and I am not sure this option works there. The documentation says: "Limited: Only default gateway for applications is supported."
Hello,

Please refer to Ecosystem Support | Avi Vantage | Knowledge Base (avinetworks.com): "Also, the deployment of the controller and service engine OVA directly on an ESX host (for a no-access or other cloud-connector types) is NOT supported. They must be deployed using the vCenter UI, as the deployment requires Open Virtualization Format (OVF) properties to be configured."
That is good news. Weird that it worked but showed as down.
What does the status say when you log in on the leader and type "show cluster detail"? For reference, a healthy cluster shows:

| cluster_vip_runtime_status |                                      |
|   cluster_vip_status       | CLUSTER_VIP_ACTIVE                   |
|   last_update              | 2023-06-27 11:52:44                  |
|   status_message           | Cluster VIP configured successfully. |
What version of AVI are you using? And what license tier are you using: Basic or Enterprise?
Hi,

You can try this:

mkdir -p ~/.ansible/collections/ansible_collections/vmware
cd ~/.ansible/collections/ansible_collections/vmware
git clone https://github.com/vmware/ansible-for-nsxt ansible_for_nsxt

And use this in your playbook:

collections:
  - vmware.ansible_for_nsxt

Hope this helps.
Hi,

Most of the time it is a configuration setting on the customer side, i.e. enabling it on the BGP session to the NSX-T T0. The best place to ask is the network department on the customer side. Or just turn BFD on in NSX-T and see if the BFD session comes up (do not just blindly do this in production).

CLI commands on the edge:

get bfd-sessions
get bgp neighbor <ip-address>

In the end it shows, for example, 'BFD Status: peer 50.50.50.10 status down'.
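If you save the `get bgp neighbor` output to a file, you can pull the BFD state out with a small pipeline rather than eyeballing it. The sample dump below is fabricated to match the line shown above; substitute your real captured output:

```shell
# Extract peer and state from a saved 'get bgp neighbor <ip>' dump.
# The heredoc input is a made-up sample, not real edge output.
awk '/BFD Status/ {print "peer:", $4, "state:", $6}' <<'EOF'
BGP neighbor: 50.50.50.10  Remote AS: 65001
BFD Status: peer 50.50.50.10 status down
EOF
```

If that prints `state: down` while BGP itself is up, BFD is almost certainly not enabled on the far side of the session.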
Hi,

I am not sure the NSX LB has an option like nginx's 'proxy_pass'. What you can normally do is select a pool based on the URI, i.e.

http://www.example.com/site1 -> pool 1 (with, e.g., a webserver serving site1)
http://www.example.com/site2 -> pool 2 (with a webserver serving site2)

The URL will stay the same, but the backend will be different.
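The mapping above can be illustrated with a toy shell function. The pool names and paths are made up, and in reality this logic lives in the load balancer's forwarding rules, not in a script; this is only to show the path-prefix-to-pool idea:

```shell
# Toy model of URI-based pool selection (pool names are hypothetical).
select_pool() {
  case "$1" in
    /site1*) echo "pool-1" ;;        # backend group serving site1
    /site2*) echo "pool-2" ;;        # backend group serving site2
    *)       echo "pool-default" ;;  # everything else
  esac
}

select_pool "/site1/index.html"   # prints: pool-1
select_pool "/site2/"             # prints: pool-2
```

The client only ever sees www.example.com; which pool answers is decided per request from the path prefix.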
Hi,

Please be aware that the native NSX load balancer is deprecated. I do not completely understand what you want to achieve. Is it just a rewrite from HTTP to HTTPS?
Hi,

Are the manager, edge nodes, and vCenter in the same subnet? (They don't have to be; I just want a better understanding of your setup.) Is NSX already installed on the hosts where you put your edge nodes? (If so, check the DFW rules.) It sounds like a firewall or routing issue, e.g. different subnets with your gateway applying NAT. Can you tell a bit more about your setup? A small diagram maybe?
Hi,

What you can try: use the web developer tools in your browser (Network tab). Create the VS by hand, and when you press save, check out the POST to /api/marco. From there you can reconstruct it for use in your own API call.
You can get the logs with:

kubectl logs <POD name> -n projectcontour
I am running a bit out of ideas. Maybe a stupid question (but I have to ask): is it not the NSX firewall blocking things?
The service looks fine. The warning you get is the first state it shows; then it moves to "ensuring" and finally "ensured". How are the pods?

kubectl get pods -o wide -n projectcontour
What is AVI saying about why it is down?
Does

kubectl get services -n projectcontour

show an external IP on projectcontour-envoy? It might be this (taken from shanks' website):

"The Service Name needs to be correct or else the deployment will fail almost immediately. From what I can see, a DNS lookup is performed with the FQDN entered, and whatever IP address comes back is added as the ingress / contour / envoy load balancer IP. So ensure you create an appropriate DNS entry and IP address. In my case, I have created a DNS entry in the vip-tkg range."

The service name you use when installing NAPP needs to resolve to an IP that is in the VIP range you configured (I assume you use AVI, since you follow the blog): NSX Application Platform Part 3: NSX-T, NSX-ALB (Avi), and Tanzu (lab2prod.com.au)
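You can sanity-check the DNS part by running nslookup against your service FQDN and picking out the answer address. The FQDN and IPs below are placeholders, so a captured nslookup dump is inlined as the sample; run the real lookup yourself and check the answer falls inside your AVI VIP range:

```shell
# Extract the resolved address (the one after 'Name:', not the DNS server's
# own address) from nslookup output. FQDN and IPs are made-up examples.
awk '/^Name:/ {found=1} found && /^Address:/ {print $2; exit}' <<'EOF'
Server:         10.0.0.53
Address:        10.0.0.53#53

Name:   napp.lab.local
Address: 192.168.100.15
EOF
```

If the printed address is empty or outside the VIP network AVI hands out, the NAPP deployment will fail early, exactly as the quoted blog describes.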
You might be able to see why it fails with kubectl:

kubectl get pods -o wide -n projectcontour

and see if a pod is in Error state:

kubectl describe pods <POD name> -n projectcontour

Screenshots and logs would be useful.
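When the namespace has many pods, it helps to filter a saved listing down to the broken ones. The pod names in the sample below are invented for illustration; on a real cluster you would pipe actual `kubectl get pods` output through the same filter:

```shell
# Print pods whose STATUS column is not 'Running' from a saved listing.
# The heredoc listing is fabricated; use real 'kubectl get pods' output.
awk 'NR > 1 && $3 != "Running" {print $1, $3}' <<'EOF'
NAME                     READY   STATUS             RESTARTS   AGE
contour-7d4f9b-abcde     1/1     Running            0          5m
envoy-xk2p9              0/2     CrashLoopBackOff   4          5m
EOF
```

Whatever that prints is where to aim `kubectl describe` and `kubectl logs` next.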
Hi,

Yes, you need edge nodes to run the services on (the T0 and T1 gateways). You can use load balancing, NAT, and IPsec VPN. The edge nodes can be bare metal or virtual. You can use static routes if you want. You might want to put the T0 in active-standby and use HA (HA VIP). For fast failover, it is advised to use BGP+BFD, but it depends on your needs. You can even use OSPF if you want.
Try this: by default, the root login password is vmware, and the admin login password is default.
Yes, according to the documentation you should add that for etcd on the control plane nodes. I had some issues doing it, and NAPP was running fine with it only on the workers.