VMware Networking Community
5mall5nail5
Enthusiast

How to "re-initialize" NSX-T 2.4.1 Manager host install status?

Hi all -

Been working with a dev environment that has both physical hosts and some nested ones I have been testing with.  I have had some issues with NSX-T not installing, so I've had to remove it from the hosts, etc.  However, in the "getting started" prompt and in the Fabric -> Nodes section it just says "NSX Installation Failed" and there is no way to clear it.  I even have some instances where I rebuilt some nested ESXi hosts (same FQDN, though) that have NO NSX-T components on them at all, yet NSX-T Manager still shows stale data stating "NSX Installation Failed".

Any way to kick the NSX-T Manager appliance(s) over and have them re-assess the inventory and install states?

Thanks!

daphnissov
Immortal

You should just be able to click the link on that node (can't remember the title) and it will re-poll the node and attempt the VIB push again. One of the leading causes of VIB installation failure is a lack of space in the bootbank. NSX-T pushes out a LOT of VIBs and needs quite a bit of room on the ESXi host.
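
If you want to rule out the space issue before re-triggering the install, here is a minimal sketch that reports free space for the bootbank partitions. It assumes you can run the Python interpreter that ships on the ESXi host, and that the usual /bootbank and /altbootbank paths are in place; adjust the paths if your layout differs.

    # Minimal sketch: report free space for the ESXi bootbank partitions.
    # Assumes the standard /bootbank and /altbootbank symlinks exist and that
    # this runs with the Python interpreter available on the ESXi host.
    import os

    def free_mb(path):
        # statvfs reports block size and free blocks for the backing filesystem
        st = os.statvfs(path)
        return (st.f_bavail * st.f_frsize) / (1024.0 * 1024.0)

    for bank in ("/bootbank", "/altbootbank"):
        try:
            print("{0}: {1:.0f} MB free".format(bank, free_mb(bank)))
        except OSError as err:
            print("{0}: not accessible ({1})".format(bank, err))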

jkensy
Enthusiast

Thanks, but that's definitely not the issue here - I have ~16-32 GB on the ESXi installation disk and they're greenfield builds.  Something else got wonky, and no amount of clicking the URL in the status does anything.  It offers to "remediate" or "resolve", but nothing happens.  I've had to go and get the NSX Manager thumbprint, then use nsxcli on the host to detach from the management plane, stop the services, etc., and run "del nsx".  But that only worked on hosts that were in the install failed state.  Others show as NSX not configured/uninstall failed, yet there are no VIBs on the host at all... because I rebuilt said host and NSX Manager never re-checked.  Or, if it did, it considers it the same host by name alone and still has stale entries in its database.

rvanaltena_rl
Contributor

Hi,

I ran into a similar issue when I entered the wrong segment for the VMkernel migration.

I was able to remove the failed host using the API.

https://<NSX manager IP>/api/v1/transport-nodes/state

    {
        "transport_node_id": "58293063-8aea-4a86-a67a-e285fc058eba",
        "maintenance_mode_state": "DISABLED",
        "node_deployment_state": {
            "state": "failed",
            "details": [
            ...

https://<NSX manager IP>/api/v1/transport-nodes/58293063-8aea-4a86-a67a-e285fc058eba?unprepare_host=false
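
For anyone scripting this cleanup, here is a minimal sketch of the same workflow in Python with the requests library: list the transport node states, pick out the entries whose node_deployment_state is "failed", and remove them with the unprepare_host=false call quoted above. It assumes the standard NSX-T list envelope (a "results" array), that the removal URL is invoked as an HTTP DELETE, and basic-auth admin credentials; the manager address and credentials below are placeholders, not values from my lab.

    # Sketch: remove stale/failed transport node records via the NSX-T Manager API.
    # NSX_MANAGER and the credentials are placeholders for your environment.
    import requests

    NSX_MANAGER = "nsx-manager.example.local"       # placeholder FQDN/IP
    session = requests.Session()
    session.auth = ("admin", "VMware1!VMware1!")    # placeholder credentials
    session.verify = False  # lab self-signed cert; use a CA bundle otherwise

    # 1. List transport node states and collect the failed ones
    resp = session.get("https://{0}/api/v1/transport-nodes/state".format(NSX_MANAGER))
    resp.raise_for_status()
    failed = [
        n for n in resp.json().get("results", [])
        if n.get("node_deployment_state", {}).get("state") == "failed"
    ]

    # 2. Delete each stale record without trying to unprepare the host
    #    (the host may already have been rebuilt, as in this thread)
    for node in failed:
        node_id = node["transport_node_id"]
        delete = session.delete(
            "https://{0}/api/v1/transport-nodes/{1}".format(NSX_MANAGER, node_id),
            params={"unprepare_host": "false"},
        )
        print(node_id, delete.status_code)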

Sreejesh_D
Virtuoso

Is this problem visible only with nested hosts?

rvanaltena_rl
Contributor

Yes, in my case. I have seen similar issues on bare-metal hosts that could be resolved from the GUI.
But I'm still not sure whether the creator of this thread had the same issue.
