VMware Cloud Community
aj800
Enthusiast

Moving VCSA and hosts to a new network. What's the best procedure or strategy?

I'm looking for the best strategy for moving our vCenter Server Appliance to a new management subnet (subnet B). The VCSA is currently on the same subnet as the hosts in the cluster (subnet A). Our setup is below.

Environment (1 cluster):

3 ESXi 6.5 hosts on subnet A

1 VCSA 6.5 on subnet A

Management (subnet B) is firewalled/routed from subnet A

Questions:

How do I change the IP/hostname of the VCSA to one in subnet B and still manage the hosts in subnet A? What ports need to be open on the firewall for the VCSA to manage them? And if I later want to move the hosts to subnet B, will I need to remove them from the cluster, change their IPs/hostnames, and then add them back?

6 Replies
aykutarar
Contributor

Hi

Can you check this VMware Knowledge Base article and this YouTube video?

aj800
Enthusiast

Neither of those links really covers what I'm looking to do.

To reiterate, we have an enterprise management subnet where our management hosts and devices are kept apart from our production traffic networks.  The ESXi hosts in both our DEV and PROD environments currently sit in their own subnet, separate from their production traffic networks, but we'd like to move the VCSA and the ESXi hosts into that enterprise management network, which will require a different domain name (example: dev_vcsa1.network1.mycompany.com/192.168.1.201 ==> dev_vcsa1.mgmt.mycompany.com/192.168.100.201).

My thought was to set up firewall rules that permit the VCSA to manage the ESXi hosts across the two subnets during the transition.  Once that's done and working as expected, I would then move each ESXi host's vmkernel management interface to the management subnet, one host at a time.  I would imagine this should not impact production traffic, since it doesn't move those NICs.
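
For reference, this is the kind of quick pre-check I'd run from the new management subnet before cutting anything over, just to confirm the firewall rules actually pass the vCenter-to-host ports (plain Python, hostnames are placeholders; 443 and 902 TCP can be tested this way, but 902 UDP from the hosts back to vCenter can't, since it's connectionless):

# check_ports.py - quick TCP reachability check from the new subnet (placeholder hostnames)
import socket

HOSTS = ["esxi01.network1.mycompany.com",
         "esxi02.network1.mycompany.com",
         "esxi03.network1.mycompany.com"]   # placeholder host names
TCP_PORTS = [443, 902]                      # vCenter -> ESXi management/heartbeat ports

for host in HOSTS:
    for port in TCP_PORTS:
        try:
            # try a TCP connection with a short timeout
            with socket.create_connection((host, port), timeout=5):
                print(host, port, "reachable")
        except OSError as err:
            print(host, port, "blocked or unreachable:", err)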

I'm trying to figure out 1) whether it's better to move the hosts first and then the VCSA, or the VCSA first and then the hosts... and 2) how to do each migration without breaking anything or impacting production traffic, given that we have a Distributed Switch in use with this vCenter cluster.

scott28tt
VMware Employee

Here is the port info from another thread today: Re: Port requirement between vcenter & Esxi


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
ZibiM
Enthusiast

Changing the IP of the VCSA is possible in 6.5, but changing the FQDN of the vCenter has only been supported since 6.7 U3.

VMware vSphere 6.7 Update 3 released • Nolabnoparty

Have you considered upgrading the VCSA?

The upgrade is a bit like installing a new appliance and importing all the data from the old one.

I'd do this change like this:

1. Prepare the IPs in the new network - reserve IPs, register new DNS names

2. Add the new management VLAN to the ESXi hosts' uplinks

3. Prepare the firewall rules so the new vCenter in subnet B can reach both the old vCenter on 443 and the ESXi servers (443 and 902 TCP to the hosts, plus 902 UDP from the ESXi servers back to the new vCenter)

4. Perform the upgrade of the vCenter - take a backup and a snapshot of the old one, then initiate the upgrade -> for this you need access through the firewall to both VCs at the same time

5. The new vCenter will be deployed, the data will be migrated, and the old one will be shut down

6. Now you can move on to the ESXi hosts - change DRS to partially automated and, one by one, do the following:

a) disconnect the ESXi server from the new vCenter

b) using the server's out-of-band management (iLO, iDRAC, IMM, XClarity, etc.), go to the DCUI and log in

c) change the VLAN, the management IP and the FQDN, then restart the management network

d) in the vCenter, connect the ESXi server using its new name (see the sketch after this list)

e) repeat with the other hosts

f) set DRS back to fully automated
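
To make step d) a bit more concrete, here is a rough pyVmomi sketch of re-adding a host to the cluster under its new FQDN - all names, passwords and the cluster name are placeholders, and you'd want to test it outside production first:

# readd_host.py - sketch: add an ESXi host back into a cluster under its new FQDN (pyVmomi)
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab shortcut; use proper certificate checking in production
si = SmartConnect(host="dev_vcsa1.mgmt.mycompany.com",   # placeholder vCenter FQDN
                  user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)

# find the target cluster (placeholder cluster name)
content = si.RetrieveContent()
cluster = None
for dc in content.rootFolder.childEntity:
    for entity in dc.hostFolder.childEntity:
        if isinstance(entity, vim.ClusterComputeResource) and entity.name == "DEV-Cluster":
            cluster = entity

spec = vim.host.ConnectSpec(
    hostName="esxi01.mgmt.mycompany.com",   # the host's NEW FQDN
    userName="root",
    password="********",
    force=True)                             # take over even if the host thinks it is still managed
# Note: without sslThumbprint the task fails with an SSLVerifyFault that contains the
# host's thumbprint; re-run with spec.sslThumbprint set to that value.

# AddHost_Task joins the host to the cluster; wait on the task in real code
task = cluster.AddHost_Task(spec=spec, asConnected=True, resourcePool=None)
print("AddHost task started:", task.info.key)
Disconnect(si)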

If a VDS is in use, this will be a bit trickier:

a) add a new vmkernel interface in the new management network (see the sketch after this list),

b) configure it with the new IP,

c) disconnect the ESXi host from vCenter

d) change the FQDN in the DCUI or the ESXi host UI

e) in the vCenter, connect the ESXi host using its new name

f) repeat with the other hosts

g) set DRS back to fully automated
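
And for steps a) and b) of the VDS variant, the extra vmkernel can be created along these lines (again pyVmomi, placeholder names and IPs, and assuming the new management portgroup already exists on the VDS - a sketch only, not a tested procedure):

# add_mgmt_vmk.py - sketch: add a second management vmkernel on a VDS portgroup (pyVmomi)
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="dev_vcsa1.mgmt.mycompany.com",   # placeholder vCenter FQDN
                  user="administrator@vsphere.local", pwd="********",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

host = find_obj(vim.HostSystem, "esxi01.network1.mycompany.com")        # current host FQDN
new_pg = find_obj(vim.dvs.DistributedVirtualPortgroup, "mgmt-new")      # placeholder portgroup

nic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False,
                         ipAddress="192.168.100.11",     # new management IP (placeholder)
                         subnetMask="255.255.255.0"),
    distributedVirtualPort=vim.dvs.PortConnection(
        switchUuid=new_pg.config.distributedVirtualSwitch.uuid,
        portgroupKey=new_pg.key))

# an empty portgroup name tells the API to use the distributedVirtualPort connection instead
vmk = host.configManager.networkSystem.AddVirtualNic(portgroup="", nic=nic_spec)
print("Created", vmk)   # e.g. "vmk2"; tag it for Management traffic afterwards
Disconnect(si)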

This is just a rough sketch - please dry-run it and think about how it would work in your environment

Use FQDN wherever you can

Ensure DNS resolution (especially the PTR record for the vCenter) and NTP reachability from both networks
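
A quick way to sanity-check the DNS part from both networks (plain Python, placeholder name and IP):

# dns_check.py - sketch: verify forward (A) and reverse (PTR) lookups for the new vCenter name
import socket

FQDN = "dev_vcsa1.mgmt.mycompany.com"      # new vCenter FQDN (placeholder)
EXPECTED_IP = "192.168.100.201"            # new vCenter IP (placeholder)

ip = socket.gethostbyname(FQDN)                     # forward lookup
print("A  :", FQDN, "->", ip, "OK" if ip == EXPECTED_IP else "MISMATCH")

name, _, _ = socket.gethostbyaddr(EXPECTED_IP)      # reverse (PTR) lookup
print("PTR:", EXPECTED_IP, "->", name, "OK" if name.lower() == FQDN else "MISMATCH")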

Good luck and have fun 🙂

ZibiM
Enthusiast

Actually you may want to wait for a while:

Unable to rename vCenter 6.7

aj800
Enthusiast

This is pretty informative, thank you.  The other critical issue I forgot to mention is the SSL certificate.  We use our own CA to issue our certs, and the certificate for the current VCSA is tied to its FQDN.  I wonder how moving the VCSA - even if done successfully with the method you suggest - would affect access to vCenter in a browser under the new FQDN.  I've had nightmares working with vCenter certificates and have broken access several times, which was a pain to fix even while working with support.  Any recommendations on that part?

Also, it dawned on me that we might not have to move the hosts over anyway, just the VCSA.  Moving them was our plan a while back, but that part might be OBE now since we're getting rid of these hosts.  They're older EOL hardware slated for decommissioning, and they're being replaced by new hosts we already have that will each get the ESXi 6.7 hypervisor installed... so as long as we can manage the old hosts where they are from the new subnet, we should be fine.

For reference, we did something like this before in a different environment, but over there, everything was already on the correct and same network:

In that setup, I upgraded the VCSA to 6.7 while it was running on the old 6.5 host cluster.  We then installed ESXi 6.7 on the new hosts, created a new host cluster in vSphere, and configured its settings to match the old one.  We had some lingering but minor network config issues (VLAN and MTU alerts), so I created new port groups on the distributed switch to match each existing port group and its VLANs to mitigate that in the new cluster.  The new hosts were cabled up and connected to the physical switches (we moved from blades to full servers), then joined to the new host cluster in vSphere, where I paired the uplinks from the new hosts with the new port groups on the vDS; vMotion was IP'd on the same subnet as the old.  A new storage system was also configured and attached to the new hosts.

So, in summary, we had the original (upgraded) vCenter, the original host cluster, the original vDS, and then the additional host cluster with new (matching) port groups for the new 6.7 hosts, plus new storage.  To migrate the VMs and decommission the old hosts, we powered off each VM, swapped its virtual network adapter(s) over to the new port group(s) (matching VLANs), did a vMotion (storage and compute) to the new host cluster, and powered it back up.  Once the old hosts were cleared, we powered them down and removed the old cluster and port groups since they were no longer needed.  It went rather smoothly.
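
In case it helps anyone reading this later, the adapter-swap part of that can also be scripted; a rough pyVmomi sketch (placeholder VM and portgroup names, VM powered off as we did it) would look something like this:

# swap_nic_pg.py - sketch: move a powered-off VM's first vNIC to a new distributed portgroup (pyVmomi)
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.mycompany.com",        # placeholder vCenter FQDN
                  user="administrator@vsphere.local", pwd="********",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_obj(vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

vm = find_obj(vim.VirtualMachine, "app-vm-01")                               # placeholder VM name
new_pg = find_obj(vim.dvs.DistributedVirtualPortgroup, "prod-vlan10-new")    # placeholder portgroup

# grab the VM's first virtual NIC and point its backing at the new distributed portgroup
nic = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualEthernetCard))
nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
    port=vim.dvs.PortConnection(
        portgroupKey=new_pg.key,
        switchUuid=new_pg.config.distributedVirtualSwitch.uuid))

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
    device=nic)

# ReconfigVM_Task applies the change; wait on the task before powering the VM back on
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
print("Reconfigure task started:", task.info.key)
Disconnect(si)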
