
When trying to deploy NAPP with the 'NSX Application Platform Automation' appliance, I get the following error when the guest cluster gets created:

Error from server (unable to find a compatible full version matching version hint "1.21.6" and default OS labels: "os-arch=amd64,os-name=photon,os-type=linux,os-version=3.0". Existing TKRs may have different OS labels. Please use .spec.distribution.fullVersion (in TKC v1alpha1) or .spec.topology.controlPlane.tkr.reference (in TKC v1alpha2+)): error when creating "napp-deploy-cluster.yml": admission webhook "version.mutating.tanzukubernetescluster.run.tanzu.vmware.com" denied the request: unable to find a compatible full version matching version hint "1.21.6" and default OS labels: "os-arch=amd64,os-name=photon,os-type=linux,os-version=3.0". Existing TKRs may have different OS labels. Please use .spec.distribution.fullVersion (in TKC v1alpha1) or .spec.topology.controlPlane.tkr.reference (in TKC v1alpha2+)

All steps before this one complete successfully. Any ideas what could cause this error?
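For reference, the webhook message suggests pinning a full TKR name instead of the "1.21.6" version hint. A minimal sketch of a TKC v1alpha2 spec doing that — the cluster name, namespace, storage class, and TKR name below are hypothetical placeholders; the real TKR name would come from `kubectl get tkr` in your Supervisor cluster:

```yaml
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: napp-cluster          # hypothetical cluster name
  namespace: napp-ns          # hypothetical vSphere namespace
spec:
  topology:
    controlPlane:
      replicas: 3
      vmClass: best-effort-large       # hypothetical VM class
      storageClass: vsan-default       # hypothetical storage class
      tkr:
        reference:
          # Full TKR name as reported by `kubectl get tkr`; the value
          # below is only an illustration of the expected format.
          name: v1.21.6---vmware.1-tkg.1
```

If no TKR listed by `kubectl get tkr` matches the version the NAPP appliance asks for, that mismatch (rather than the YAML) would be the thing to resolve first.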
Hi! I've been trying to find resources on NSX Manager cluster failure, but unfortunately I cannot find one specific to my scenario, and I have follow-up questions.

Design: 3-node cluster located in a single rack
Plan: Install/configure a 3-node NSX-T Manager cluster, 1 NSX-T Manager node per ESXi host, 6 x 10G uplinks only, 2 per ESXi host contributed to the DVS

Questions:
1. If all NSX-T Manager nodes fail:
   a. What will happen to my policies?
   b. What will happen to my VMs?
2. Given that the manager and the controller are already on the same appliance, what is the implication if all the manager nodes fail?
3. Is there a difference in the outcome if you are using a VDS compared to an N-VDS?

Hope someone could help! Thank you!
Background: We have an AVS 2.0 environment running with many subnets/segments in Azure and need to migrate some VMs from the cloud back to an on-prem vCenter already connected via HCX. VM migrations work, but we would like to use HCX Network Extension to migrate these VMs on-prem by extending the subnets/segments in AVS 2.0 to on-prem.

I tried to initiate the network extension from the cloud side, but at "Select source networks for extension to remote site" I am given: "! Network Extension should be initiated from <on-prem service mesh>", and I can only extend the network from the on-prem connector. I can create a network extension from on-prem to AVS 2.0, but that is not what we need; we need to select the cloud network as the source and extend it back to on-prem.

Is there a way to extend a cloud-based SDDC segment/network to on-prem using HCX? In my searching online I see it is "possible", but I have found no examples other than on-prem -> cloud or cloud -> cloud. Alternatively, we are looking at duplicating the AVS 2.0 network on-prem, bulk migrating the VMs, and dealing with downtime after cutover, but NE would be preferable.
How do you renew an NSX-T self-signed certificate that is quickly approaching expiration? I have an NSX-T self-signed cert that is going to expire a week from today. I am also wondering: if it does expire, does that cause an outage?
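The usual flow is to import a new certificate into the manager and then activate it via the node API. A hedged sketch of how that activation call is typically constructed — the manager hostname and certificate ID below are placeholders, and you should verify the endpoint against the API guide for your NSX version before relying on it:

```python
# Sketch: build the NSX-T node API call that switches the manager's HTTP
# service over to a newly imported certificate. This only constructs the
# URL; actually sending it requires authenticated POST (e.g. via requests).
NSX_MANAGER = "nsx-mgr.example.local"   # placeholder hostname
CERT_ID = "new-cert-uuid"               # ID returned when importing the new cert

def apply_cert_url(manager: str, cert_id: str) -> str:
    """Return the POST URL that activates a certificate for the manager API/UI."""
    return (f"https://{manager}/api/v1/node/services/http"
            f"?action=apply_certificate&certificate_id={cert_id}")

print(apply_cert_url(NSX_MANAGER, CERT_ID))
```

Assumption worth checking: an expired manager certificate generally breaks trust for UI/API clients rather than the dataplane, but confirm the blast radius for your version before letting it lapse.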
I am trying out VMware NSX-T 4.1. I want to use the distributed firewall to prohibit communication between all the virtual machines of my two clusters, cluster01 and cluster02, but I did not find a membership condition based on the cluster level in "Set Members", as shown in the figure. I remember that NSX-V could use clusters as source or destination objects. Does NSX-T not support this object level? Is there an easy alternative? I don't want to add every virtual machine as a member individually.
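Since NSX-T groups have no cluster-based membership criterion, a common workaround is to tag the VMs per cluster (tags can be applied in bulk) and build a dynamic group on the tag. A hedged sketch of the group payload shape for the Policy API (`PATCH /policy/api/v1/infra/domains/default/groups/<group-id>`) — the tag scope/value and display name are hypothetical:

```python
# Sketch: a dynamic NSX-T group whose members are all VMs carrying a
# "cluster|<name>" tag, as an alternative to cluster-level membership.
import json

def cluster_group_payload(tag: str, scope: str = "cluster") -> dict:
    """Group payload matching VMs tagged with `scope|tag` (names are illustrative)."""
    return {
        "display_name": f"vms-{tag}",
        "expression": [
            {
                "resource_type": "Condition",
                "member_type": "VirtualMachine",
                "key": "Tag",
                "operator": "EQUALS",
                "value": f"{scope}|{tag}",
            }
        ],
    }

payload = cluster_group_payload("cluster01")
print(json.dumps(payload, indent=2))
```

One such group per cluster can then be used as source/destination in DFW rules, so individual VMs never need to be listed.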
Dear Team,

During V2T I am getting the below message. Could someone please assist with how to resolve it?

"DFW Rule ID 1133 with name RSA Auth Access cannot be migrated because the following grouping object of type: SecurityGroup, value: securitygroup-50, name: NSX_SG_All-VMs was not correctly translated to NSX-T"

Thank you in advance
Whenever I attempt to create a binding map on a segment for a QoS profile I have created, every time I include the "qos-profile-path" field in the body I receive a bad HTTP request error:

Error code: 522003
Error message: "invalid qos profile in segment monitoring profile map"

What's weird is that I can do it manually in the UI, but I have thousands of segments to update, and that's just not going to fly hahaha. Anyone have any idea what is going on? NSX-T version is 3.2.3.
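One detail worth checking: the error text mentions the *monitoring* profile binding map, which may mean the request is landing on `segment-monitoring-profile-binding-maps` rather than the QoS binding map endpoint. A hedged sketch of how a QoS binding via the Policy API is typically shaped — the segment ID, profile ID, and map ID below are placeholders, so verify the path against your 3.2.3 API reference:

```python
# Sketch: construct the PATCH URL and body for binding a QoS profile to a
# segment via the Policy API. Only builds the request; sending it needs an
# authenticated HTTP client.
def qos_binding_request(segment_id: str, profile_id: str,
                        map_id: str = "default") -> tuple:
    """Return (url, body) for a segment QoS profile binding map PATCH."""
    url = (f"/policy/api/v1/infra/segments/{segment_id}"
           f"/segment-qos-profile-binding-maps/{map_id}")
    body = {"qos_profile_path": f"/infra/qos-profiles/{profile_id}"}
    return url, body

url, body = qos_binding_request("web-seg-01", "gold-qos")
print(url)
print(body)
```

A quick way to confirm the exact shape the UI uses is to capture the browser's API call (developer tools, network tab) while making the binding manually, then replay that shape in the script.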
Dear Team,

During V2T I am getting the below message. Could someone please assist with how to resolve it?

"If you have IP-based IDFW rules and do not want traffic to be interrupted during migration, you need to manually create shadow firewall rules and remove them after the migration is completed."

Thank you in advance
During V2T I am getting the below message. How do I resolve it? Please assist.

Feedback for 'NSX_SG_www-proxy-access' with id 'securitygroup-14': "'NSX_SG_www-proxy-access' contains 1 exclude member(s). Exclude members are not supported on NSX-T. Please redefine this securitygroup on NSXv by removing all the exclude members."

Thank you in advance
Dear Team,

During V2T I am getting the message "'ClusterComputeResource' is not supported in NSX-T." A cluster is selected as an object on a few DFW rules; how do I mitigate this? Please assist.

Thank you in advance
Good day mate, I'm currently having an issue with NSX-V version 6.4.x.

Let's say we have 2 vCenters working in linked mode (vCenter-A and vCenter-B), with Cloud Director working on top of these linked vCenters. NSX-V 6.4.11 is configured on both vCenters and in Cloud Director. We've configured the basic components: NSX Manager, NSX Controllers, and NSX Edge gateways deployed for most customers.

The problem we are facing right now is that on vCenter-A, edge gateways randomly hang. Let's say we have 3 VMs:
1. Edge gateway VM = 192.168.1.1
2. VM-A = 192.168.1.2
3. VM-B = 192.168.1.3

The symptoms are as follows:
- VMs behind this edge gateway lose their internet connection (their public IPs are not pingable from my laptop), and the VMs cannot ping the edge gateway VM.
- On the edge gateway VM, the ARP entries for the other VMs (VM-A and VM-B) using this edge are missing from the ARP output.
- On the edge gateway VM console, we can still reach the internet (8.8.8.8 for testing).
- From the edge gateway VM, we can't reach VM-A and VM-B (pings to 192.168.1.2 and 192.168.1.3 are unreachable).
- VMs behind this edge gateway can't reach the edge gateway (ping to 192.168.1.1 is unreachable) and can't reach the internet.

Note that this only happens on vCenter-A; vCenter-B has no issue at all. What we've done so far is upgrade NSX on vCenter-A from 6.4.11 to 6.4.14 (not helpful, the issue persists after the upgrade).

When the issue happens (we get an alert that the public IP is unreachable), our workarounds are:
- Redeploy the edge gateway from Cloud Director, which fixes the issue (not permanent; some edge gateways have had the issue repeat, though some have not so far).
- Migrate the edge gateway VM to the same ESXi host as the VMs and create a rule to keep them together (192.168.1.1-3 stay on the same host; this is a permanent fix for us right now, but not a good idea, I know).

We have hundreds of edge gateway VMs on vCenter-A, but this happens to one edge at a time (the others remain stable; only one has the issue at a time, but it is a different random edge gateway each time). One more thing to note: for vCenter-A and vCenter-B we have the physical hosts and switches in the same chassis and rack; most of them are mixed together using the same hardware and configuration. But this never happens on vCenter-B.

vCenter version 7.0.3 (Build 20990077)
ESXi version 7.0.3 (20842708)
Hi VMware Support Team, I am unable to see any open tickets in my support request history; I can only see closed tickets. Kindly help so that I can see all the tickets. My Entitlement Account is 579712866, AT&T.

Thanks,
Moniruzzaman Gazi
Hi there, I am setting up a nested lab using a single network (same default gateway for everything), with multiple physical hosts, not one. After edge cluster deployment, only 2 tunnels come up per edge (2 report down). Once a VM with an NSX-backed network/port group is configured, all tunnels from the edges to the hosts report down. Further, when a BGP summary is generated, the 169.254.0.xxx addresses show "connect" and red (not established). To confirm: there is an active T1 plumbed to the T0. I am not sure what I am overlooking, given this connection should be established between the edges, and this network is what is used once the SR is connected to the DR. What have I missed?
We recently onboarded a setup that has NSX-T version 3.1 running on a 14-node Cisco Hyperflex cluster. This particular setup hasn't been upgraded since its inception, and we're now planning its first upgrade to NSX-T version 3.2. While I've done my best to understand the intricacies of both NSX-T and Hyperflex, I acknowledge I might be misinterpreting some aspects. Please correct me if that's the case.

My specific concerns are:
1. Upgrading nodes the right way: NSX-T's approach seems to involve sequentially restarting nodes during an upgrade. However, given Hyperflex's architecture, each node requires a preparatory phase, typically managed via HX Connect and followed by a storage rebalance. How can we ensure NSX-T's upgrade process respects these nuances?
2. Cluster health checks between node upgrades: After upgrading a single node in NSX-T, it's critical to ensure our Hyperflex cluster's health before moving on to the next node. Does NSX-T provide a way to pause between node upgrades or offer an opportunity for manual intervention?
3. Maintaining communication: Post-upgrade, the nodes need to maintain seamless communication with each other and align with NSX-T's central controls. What provisions does NSX-T have in place to ensure this?

Given the above, we'd appreciate guidance on:
- Best practices for upgrading NSX-T within a Hyperflex environment, taking storage rebalancing into account.
- Potential challenges we might encounter due to the intertwined nature of NSX-T and Hyperflex.
- Tips or tools to monitor the health and connections of NSX-T and Hyperflex during the upgrade.
- Options within NSX-T to either manually intervene or synchronize with Hyperflex's health checks during the process.

As we navigate this initial upgrade, your expert insights will be instrumental in ensuring a smooth and efficient transition.
Hi All,

We have 2 physical routers and 2 edge nodes connected with BGP sessions as per the attached diagram. Router1 is connected to Edge01 with 2 different BGP sessions, and Router2 is connected to Edge02 with 2 different BGP sessions for redundancy.

We have observed that when we brought down one of the edge nodes, all 4 BGP sessions went down and are showing in the Idle state, even though one edge node is still up.

Appreciate your quick response.
Before I open a support case: has anyone else had a failure with the 4.1.1 NSX upgrade (from 3.2.3)?

The first two edges attempted (parallel upgrade), a virtual edge (on ESXi 7.0.3) and a bare-metal edge, both failed at the same spot, 70%. When logging in to the appliances, it appears that both platforms have lost the fp-eth NICs/dataplane, though it is likely a deeper issue. ifconfig doesn't list the dataplane NICs on either device (on the bare-metal edge they are Intel X710). I tried the usual: reboot, resume, etc. It looks like the OS upgrade went fine, and the management NICs still work.

The log reports this (and pretty much the same error for both the virtual and bare-metal edges):

Pnic status of the edge transport node 4868e5cc-e684-4e66-9ad9-790f981e80f4 is DOWN.,Overall status of the edge transport node 4868e5cc-e684-4e66-9ad9-790f981e80f4 is DOWN.,Edge node 4868e5cc-e684-4e66-9ad9-790f981e80f4 , has errors Errors = [{"moduleName":"upgrade-coordinator","errorCode":30201,"errorMessage":"Pnic status of the edge transport node 4868e5cc-e684-4e66-9ad9-790f981e80f4 is DOWN."}, {"moduleName":"upgrade-coordinator","errorCode":30212,"errorMessage":"Overall status of the edge transport node 4868e5cc-e684-4e66-9ad9-790f981e80f4 is DOWN."}, ] after state sync wait.

I initially missed uploading the latest Upgrade Coordinator pub file, but I doubt that would cause the above (I went back after the failure and uploaded it). Any insight would be appreciated.

Regards,
Wayne
The NSX playbook provides a detailed, step-by-step guide for specific use cases. The purpose of the playbook is to serve as a guide for day-to-day NSX operations and to facilitate the learning process for NSX.
Hello Everyone, I came across an issue today with vRNI and NSX: vRNI could not delete a firewall IPFIX profile, and the profile status is stuck at "In Progress". Has someone come across this?
Hello folks. Has anyone already made use of vRNI to create migration waves (communication affinity, who talks to whom)? If yes, what were the challenges and problems you faced? I know that vRNI receives flows from the vDS with source, source port, destination, and destination port to create DFW rules, but can it help with migration waves by establishing affinity-group communication?
Hello All, I did a recent upgrade of one of my lab NSX-T environments to 4.1.1.0. After the upgrade I am now getting an odd error with my current transport node config, using my current IP pools, which are IPv4 only:

Error: Static IP Pool ************ of type null should not be provided for IP assignment type STATIC_IP_POOL in Host Switch **************. Please provide IP pool of type IPV4. (Error code: 9884)

When I look at all of my IP pools, they all show a status error of "Pool identifier is null." I decided to build a new test IP pool from scratch, and it goes right to the same "Pool identifier is null" error. Anyone have any ideas what is going on here?
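One way to narrow this down is to pull each pool's subnets via the Policy API and confirm they are all IPv4 static subnets, since the error complains about the pool "type". A hedged sketch — the endpoint path follows the NSX Policy API convention and the pool ID is a placeholder; this only builds the URL and a crude address-family check, not the authenticated call itself:

```python
# Sketch: helpers for sanity-checking an NSX-T IP pool's subnets.
def subnet_list_url(pool_id: str) -> str:
    """Policy API path listing the subnets of a given IP pool."""
    return f"/policy/api/v1/infra/ip-pools/{pool_id}/ip-subnets"

def looks_ipv4(cidr: str) -> bool:
    """Crude IPv4 check on a subnet CIDR string (no colons, three dots)."""
    return ":" not in cidr and cidr.count(".") == 3

print(subnet_list_url("tep-pool-01"))   # placeholder pool ID
print(looks_ipv4("172.16.10.0/24"))
```

If the API shows the pools as healthy while the UI reports "Pool identifier is null", that discrepancy (plus the fact that a brand-new pool hits the same error) would point at a post-upgrade realization or UI issue worth raising with support.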