qsnow
Contributor

Changing the IP structure of ESX, vCenter, Nexus 1000v

I just finished attempting this, and it failed to the point of me rebuilding the entire environment. Because of that failure, I'm trying to find out whether this is even possible. :)

I was building an environment that consisted of 3 ESX hosts, vCenter, Nexus 1000v, Nexus 5k, and NetApp (10Gb CNAs in everything).

Once built, this environment will be transported to its destination and set up there (an existing network). I couldn't maintain network connectivity AND use the existing IP addressing of where it will reside, so I created a new VLAN and IP structure for my initial build.

Anyway, with the way the Nexus 1000v integrates into vCenter (in my case VC is a VM), when I began the process of trying to change IP addressing it quickly became a nasty case of the chicken or the egg, if you will. If I made any changes to the SRM with the new addressing, it would lose connectivity to the ESX servers and/or vCenter. Once that connection was lost, it had no way to register port-group changes and such to vCenter to push back down to the ESX host SEMs. I tried adding a new VLAN and switching the port-groups over to the new VLAN/trunk, but it didn't seem to help. I also tried creating another service console on a regular vSwitch, but for some reason that seemed to confuse things as well once I started making 1000v changes.
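
For reference, the extra service console I tried was added from the ESX console along these lines (the vSwitch name, NIC and addressing are just from my temporary build, so treat them as examples):

    # add a standard vSwitch with its own uplink for a backup service console
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2
    esxcfg-vswitch -A "Service Console 2" vSwitch2
    # create a second service console interface on that port group, in the temporary range
    esxcfg-vswif -a vswif1 -p "Service Console 2" -i 192.168.50.11 -n 255.255.255.0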

You also can't make certain changes to the SRM port-groups (like VLAN changes) if the port-group is in use.

Anyone have thoughts on what this process would or might look like? The only other thing I could think of would have been to migrate AWAY from the 1000v entirely, change everything, and then add it back in... but I was trying to avoid that.

6 Replies
chelsen
Contributor

Hi,

Could you elaborate on what you are trying to accomplish?

Building an ESX cluster with one IP range and trying to move it into production while changing the IP range is always a pain, no matter whether you use vSwitches or the Cisco Nexus 1000V. You always have to re-assemble the cluster that way and need to be careful with regard to the order of steps.

It can be done although I wouldn't recommend it.

Sorry, but I didn't understand how Site Recovery Manager (SRM) fits into your desired picture. Could you please explain this as well?

With SRM you need a vCenter instance at your primary site as well as another vCenter instance at the recovery site. Thus you also need two VSM instances of the Cisco Nexus 1000V, one at each site.

Hope that helps.

Chris

qsnow
Contributor

Terribly sorry... it's been one hell of a week... I didn't mean SRM, but VSM and VEM -- the Nexus 1000v components.

As for what my end goal was -- it was to take the environment as configured and convert the IP addressing structure to the destination's LIVE IP range. With the way the 1000v switching integrates, ties into VC and is configured via the VSM, I couldn't start switching IP addresses because it would lose connectivity.

chelsen
Contributor

Hey, no problem about the confusion. Those kinds of weeks happen to all of us. :)

It is possible to change the management IP addresses of an ESX cluster, even while it is managed by the N1KV. But I highly recommend against doing something like this in production, as there are just too many things that can go wrong and ways to shoot yourself in the foot.

The N1KV is pretty robust, and the VEM will even continue working in its current state while the VSM is gone. And if vCenter is down at the same moment, there won't be any configuration updates for the VSM to react to. So you could change the IP address of vCenter, then the one of the VSM, and reconnect the two again. It becomes trickier with the ESX/ESXi hosts, as you cannot simply change the IP address of the management interface that vCenter is talking to. You have to disconnect the host from vCenter, change the IP address, and then connect it again with the new address.
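
Just as a rough sketch of those steps (the connection name, interfaces and addresses below are made-up examples, and the exact commands depend on your versions), the VSM and host side could look something like this:

    ! on the VSM, once vCenter has its new address (NX-OS syntax, example values)
    conf t
     interface mgmt0
      ip address 10.20.1.10/24
     svs connection vcenter
      no connect
      remote ip address 10.20.1.5
      connect
    end
    copy running-config startup-config

    # on each ESX host, after disconnecting it from vCenter
    esxcfg-vswif -i 10.20.1.21 -n 255.255.255.0 vswif0
    # then re-connect the host in vCenter using its new address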

But honestly: all this trouble is usually not worth it. Better to "disconnect" everything, change the IP addresses, and re-connect. It saves you a lot of headaches.

Hope that helps.

Chris

qsnow
Contributor

Aye, it does help... In my case it hadn't quite made it to production yet, so I just rebuilt the whole thing.

Your first response (based on my wrong acronym) did prompt another thought... I will be installing SRM in the next couple of weeks and replicating some VMs to another location. The backup site does not have the Nexus add-on... Any idea what SRM will do? Would it just be a matter of me creating a vSwitch with the correct subnet and remapping the NICs in those VMs in case of a failover?

Is SRM Nexus 1000v aware?

chelsen
Contributor

To answer your last question first: The Cisco Nexus 1000V works perfectly fine with Site Recovery Manager (SRM) and vice versa.

Keep in mind that the way SRM works is that you'll have two independent vCenters. What SRM basically does for you is give you a push-one-button approach for automatically re-registering VMs from a "mirrored" storage LUN into the recovery-site vCenter after a failure. Everything that SRM does you could do by hand, though; it's just much more convenient.
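
Doing it by hand would roughly mean rescanning for the replicated LUN on a recovery host and re-registering the VMs, something like the following on the ESX console (the HBA number and the datastore/VM paths are only placeholders, and I'm leaving out the resignature/mount steps since those depend on your array and ESX version):

    # rescan an HBA so the replicated LUN shows up (vmhba number is an example)
    esxcfg-rescan vmhba1
    # register a VM from the replicated datastore (path is a placeholder)
    vim-cmd solo/registervm /vmfs/volumes/replica_ds/vm01/vm01.vmx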

Therefore your network config in the recovery site is independent of the config in the primary site. You can create the same set of VLANs and port groups and do a 1:1 mapping, but you could also map port groups "Prod 1", "Prod 2" and "Prod 3" in the primary site to a single port group "Recovery Main" in the SRM site. Whether these port groups reside on a Cisco Nexus 1000V or a vSwitch doesn't matter. As such you can even mix and match, using the Cisco Nexus 1000V in the primary site and the vSwitch in the recovery site.
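
If the recovery side stays on plain vSwitches, a "Recovery Main" style port group is just something like this on each recovery host (the vSwitch name and VLAN ID are examples):

    # standard vSwitch port group on a recovery host
    esxcfg-vswitch -A "Recovery Main" vSwitch1
    esxcfg-vswitch -v 100 -p "Recovery Main" vSwitch1

The mapping of the primary-site port groups to it is then just part of the SRM inventory mappings.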

SRM gives you tons of flexibility and the Cisco Nexus 1000V plays along with that very nicely.

But what you'll need to take care of from a higher-level networking perspective is the Layer 2 interconnect between your two sites. Your recovered VMs will expect to be on the same subnet as they were before the failure. Somehow you'll need to address this:

One way is changing the IP addresses and DNS entries of the recovered hosts (this doesn't work very well for some servers, e.g. domain controllers).

Another option is to use a Cisco Data Center Interconnect (DCI) solution such as Overlay Transport Virtualization (OTV).

And yet another option is to move the entire subnet along with the recovered VMs to the secondary site and announce the prefix(es) from there. That can, for example, be done via HSRP; a rough sketch of that follows below.
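
For the HSRP variant, a minimal sketch on the secondary-site gateway would be along these lines (IOS syntax; the VLAN, addresses and priority are made-up examples):

    ! SVI for the recovered subnet on the secondary-site core
    interface Vlan100
     ip address 10.10.10.3 255.255.255.0
     standby 1 ip 10.10.10.1
     standby 1 priority 110
     standby 1 preempt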

Hope that helps and good luck with the SRM setup.

qsnow
Contributor

Yes, again helpful. As for the IP side of the failover -- in the case where we have to FAIL over, I will be assuming the primary site is unavailable and considered down hard for an extended period of time (as per our guidelines for determining what a failure means). In that case, I would bring the entire subnet over to the secondary site: create the VLAN on my 6513 and trunk it into the current ESX trunks. That side of it should be good.
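
Roughly what I'd push onto the 6513 (the VLAN ID and trunk port are just examples for illustration):

    ! create the recovered VLAN and allow it on an existing ESX trunk
    vlan 100
     name Recovered-Prod
    !
    interface TenGigabitEthernet1/1
     switchport trunk allowed vlan add 100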

Appreciate the info on SRM - I've tested it briefly as a proof of concept (sold it to the boss as "you can recover the systems with it if I'm not available")... I'd expect less than an hour to recover with SRM... much longer without it, manually mounting LUNs and such.
