VMware Networking Community
evil242
Enthusiast

Redeploying NSX-T Manager nodes to new VLAN/subnet

NSX-T 3.1

I was able to redeploy most of the managers to the new desired VLAN/subnet, except for the originally deployed NSX-T Manager node (the one from the OVA).

So to replace the original, do I

a) Delete/detach the original (first-deployed) NSX-T Manager node from the cluster, deploy a new manager node to replace it, and be done?  https://docs.vmware.com/en/VMware-Validated-Design/6.0.1/sddc-backup-and-restore/GUID-5B9F19F5-98BC-...

b) Back up, delete/detach, deploy a new node, and then restore?

The whole "Restore an NSX-T Manager Node to an Existing NSX-T Manager Cluster" appears to provide good guidelines, but not definitive if you are updating the subnet and re-IP'ing everything.

https://docs.vmware.com/en/VMware-Validated-Design/6.0.1/sddc-backup-and-restore/GUID-F7FD64A3-6594-...

 

Damion Terrell (He/Him)
Core IT Service Specialist
UNM – IT Platforms – VIS
“You learn the job of the person above you, and you teach your job to the person below you.”

Accepted Solutions
evil242
Enthusiast

Hello,

So yes, you can deploy extra nodes. But first, if you have a Virtual IP set under System > Appliances > NSX Manager, you must remove the Virtual IP. While that VIP is configured, new nodes can only be deployed in the same IP range.
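If you want to check or clear that VIP outside the UI, here is a minimal sketch against the NSX REST API (nsx01.example.com and the credentials are placeholders, and you should verify the api-virtual-ip calls against the NSX-T 3.1 API guide before relying on them):

import requests
from requests.auth import HTTPBasicAuth

NSX = "https://nsx01.example.com"                   # placeholder: any manager node
AUTH = HTTPBasicAuth("admin", "VMware1!VMware1!")   # placeholder credentials

# Show whether a cluster virtual IP is currently configured
vip = requests.get(f"{NSX}/api/v1/cluster/api-virtual-ip", auth=AUTH, verify=False).json()
print("Current VIP:", vip.get("ip_address"))

# Clear it so new manager nodes can be deployed into a different subnet
# (the API equivalent of removing the Virtual IP under System > Appliances)
r = requests.post(f"{NSX}/api/v1/cluster/api-virtual-ip?action=clear_virtual_ip",
                  auth=AUTH, verify=False)
r.raise_for_status()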

Once that dependency was removed, I deployed the extra nodes into the new IP range.

Next, I began deleting the old nodes in the old IP range, as far as the NSX Manager interface would allow. This is basically the same as the procedure in the link you provided (although I didn't have that document at the time).

For the remaining NSX Manager node, the one originally deployed from the OVA, I followed the "Procedure" section, which describes SSH'ing into one of the new NSX Manager nodes as admin (which lands you in the NSX CLI), then

> get cluster status

copy the UUID of the original OVA-deployed node, and then

> detach node <node-UUID>

Then, in vCenter, shut down the old VM and/or delete it from disk.

Wait a few minutes, then refresh the Appliances list; the original OVA-deployed node should be gone from the list.
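If it helps, you can also cross-check the UUID of the old node before detaching it, and confirm it is really gone afterwards, by listing the cluster nodes over the API. A minimal sketch (placeholder host and credentials; the management-address field name is from memory, so check your own API output):

import requests
from requests.auth import HTTPBasicAuth

NSX = "https://nsx01.example.com"                   # placeholder: any surviving manager node
AUTH = HTTPBasicAuth("admin", "VMware1!VMware1!")   # placeholder credentials

# Each entry's "id" is the node UUID that "detach node <node-UUID>" expects
resp = requests.get(f"{NSX}/api/v1/cluster/nodes", auth=AUTH, verify=False)
resp.raise_for_status()

for node in resp.json().get("results", []):
    # "appliance_mgmt_listen_addr" is the management IP field as I recall it;
    # if it prints None, check the raw JSON for the right key
    print(node.get("id"), node.get("appliance_mgmt_listen_addr"))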

Lastly, under System > User Management > VMware Identity Manager, I was able to connect to vIDM and specify an external load balancer VIP.
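To confirm the vIDM integration came back up cleanly after pointing it at the external load balancer VIP, I believe the node API exposes a status call along these lines (placeholder host and credentials again; verify the path against the API guide before depending on it):

import requests
from requests.auth import HTTPBasicAuth

NSX = "https://nsx01.example.com"                   # placeholder: any manager node
AUTH = HTTPBasicAuth("admin", "VMware1!VMware1!")   # placeholder credentials

# Read back the vIDM integration status after reconfiguring it in the UI;
# a healthy integration should report a runtime state of ALL_OK (as I recall)
status = requests.get(f"{NSX}/api/v1/node/aaa/providers/vidm/status",
                      auth=AUTH, verify=False).json()
print(status)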

Damion Terrell (He/Him)
Core IT Service Specialist
UNM – IT Platforms – VIS
“You learn the job of the person above you, and you teach your job to the person below you.”

2 Replies
JohannesWalter
Contributor

What problem are you worried about? IMHO the three nodes of a manager cluster are equivalent; the node deployed first (from the OVA) has no special role. I would therefore take the same approach as for the other two nodes. After the third node is added back to the cluster, it will synchronize again.
If VCF is used, however, I'm not sure whether there are any other dependencies....
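For what it's worth, a quick way to confirm the cluster has fully converged after a node is added back is to poll the cluster status API. A minimal sketch (placeholder host and credentials; the exact status field names may vary slightly between versions, so treat them as assumptions):

import requests
from requests.auth import HTTPBasicAuth

NSX = "https://nsx01.example.com"                   # placeholder: any manager node
AUTH = HTTPBasicAuth("admin", "VMware1!VMware1!")   # placeholder credentials

# Roughly the API equivalent of "get cluster status" on the CLI
status = requests.get(f"{NSX}/api/v1/cluster/status", auth=AUTH, verify=False).json()
print("Cluster ID:  ", status.get("cluster_id"))
print("Mgmt cluster:", status.get("mgmt_cluster_status", {}).get("status"))  # expect STABLE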

imatacic
Enthusiast

I need to go through the same process of changing the IP addresses of the NSX-T Managers. Reading the documentation, I found this article: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/installation/GUID-10CF4689-F6CD-4007-A33E-A9... which says:

The normal production operating state is a three-node cluster of the NSX Manager (Local Manager in an NSX Federation environment) or Global Manager. However, you can add additional, temporary nodes to allow for IP address changes.

It seems three nodes is not the maximum in a cluster. I then found this article, which describes your problem: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-91248B32-856A-4254-A337-... Scenarios A and B seem reasonable and lower risk. I'll try scenario B because I also need to move the Managers to a new cluster.

--
Please KUDO if you find this post useful