VMware Cloud Community
AngelC2
Enthusiast

Migration of Two Separate VMware Environments Best Practices

I need assistance figuring out the best approach for migrating between two separate VMware environments.  Here is the breakdown of the environments:

Old Environment:

  • vCenter and ESXi hosts are v6.0
  • 4 ESXi Nodes in a Cluster
  • 1 vCenter Virtual Appliance Server
  • Dell Compellent SAN connected to hosts via Fiber

New Environment:

  • vCenter and ESXi hosts are v6.0
  • 3 ESXi Nodes in a Cluster
  • 1 vCenter Virtual Appliance Server
  • Dell EMC VxRail vSAN connected to hosts via Fiber

I have two separate VMware environments; the old one is on a different VLAN than the new one, but both are on the same physical network.  The two SANs cannot see each other and are isolated from one another on the network.  If reconfiguring the network so that they are on the same VLAN/subnet and can see each other is not an option (I don't know yet whether it is), what is the best approach for migrating my VMs and their data over to the new vCenter, hosts, and vSAN?

Any help would be greatly appreciated, thanks.

Angel

1 Solution

Accepted Solutions
AngelC2
Enthusiast

Yes, you are 100% correct, and that's exactly what I had to do.  I took the information you gave me and presented it to our network engineer, along with showing him what our vCenter configurations looked like for both environments (hosts, switches, etc.), and from that, after some troubleshooting, he was able to determine what was needed.

It turns out he had to create a route on our Cisco router (our gateway) to route traffic to the vMotion network.  I was then able to configure the vMotion adapters and the TCP/IP stack correctly, which allowed vMotion and Storage vMotion to work.  So overall it wasn't purely an L3 solution, since our network isn't configured that way and we didn't want traffic between the two networks routed through our MX (firewall).  I'm explaining it as best I can based on what I was told and shown; either way, I've definitely learned from this experience, so in the end it's all good.

Thank you for all of your help; I couldn't have done it without you.  🙂


24 Replies
daphnissov
Immortal

This can go a lot of different ways depending on the complexities of the old environment and those of the new, but it sounds like what you want is a way to consolidate the old environment and absorb it into the new one. If that's the case, this is essentially a migration (or lift-and-shift). However, there are lots of things to consider before committing to that path. I'd recommend reading this blog I wrote; it will give you a better idea of the things to check and validate before you decide a swing migration is the best approach for you.

Upgrading vSphere through migration

AngelC2
Enthusiast

daphnissov,

Thanks for your reply and input.  I've read your blog and wanted to let you know that this is definitely a migration.  I've been involved with VMware since version 3.5, have done plenty of in-place upgrades successfully, and have replaced old hardware (ESXi hosts) with new while keeping the same vCenter and SAN in place.  This is the first time I've actually had to migrate from an existing system to a new one, meaning a new vCenter, hosts, and vSAN.

I know things would be much easier if we just reconfigured our new VxRail to use the same subnet/VLAN as our existing VMware environment and SAN, so that I could simply disconnect my hosts, reconnect them to the new vCenter, and then vMotion and Storage vMotion my VMs and data to that system, but I'm not yet sure that's an option for me.

All I care about is getting the VMs and their data over to the new system.  We are not going to upgrade anything once that's completed, since our VxRail solution is upgraded by our Dell EMC provider only after they have passed their validation, so version 6.0 (rather than the latest, 6.5) will stay as is for now.

Once all of my VMs and their data are on the new system and running, I will retire the old ESXi hosts and vCenter; only one system will stay operational in our environment.  I need direction on the best option for my scenario and on anything else, beyond the networking side of things, that I'll need to do to get this migration completed.

Thanks!

daphnissov
Immortal

I know things would be much easier if we just reconfigured our new VxRail to use the same subnet/VLAN as our existing VMware environment and SAN, so that I could simply disconnect my hosts, reconnect them to the new vCenter, and then vMotion and Storage vMotion my VMs and data to that system, but I'm not yet sure that's an option for me.

If you're going from ESXi hosts with FCP-attached storage to VxRail (vSAN), then you could just disjoin the hosts from the current vCenter, join them to the VxRail vCenter, and then perform a migration of the VMs. As long as the ESXi hosts have Layer 3 connectivity on their management networks, the migration of the VMs will succeed. There's nothing from a storage perspective you should have to change in either environment.
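
For example, a quick sanity check from an SSH session on a source host would be something along these lines (assuming vmk0 is the management vmkernel port; adjust the interface and address for your environment):

vmkping -I vmk0 <destination-host-management-IP>

If that replies, the hosts have the L3 reachability the migration needs.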

When I've done customer migrations to VxRail, this is generally how I've done it. I only pointed out that blog as a reminder that once you swing hosts or migrate VMs (it doesn't matter which), having them under a different vCenter for management will have implications for your existing backup, monitoring, and any other processes that communicate with vCenter and depend on the vCenter-provided IDs remaining consistent.

AngelC2
Enthusiast

Your comment "going from ESXi hosts with FCP-attached storage to VxRail (vSAN)" is a little unclear.  I'm guessing FCP-attached storage means Fibre Channel, correct?

If so, we currently have hosts connected to our Compellent SAN via Fibre Channel, and our VxRail is also connected via fiber, both running on the same network.  So you're saying that as long as the Layer 3 network connectivity is configured correctly, I won't need anything else to connect them to the new vCenter?

I take it that even if the IPs on these hosts are different, as long as the network is configured to allow them to communicate with each other, I can do this.  Did I understand you correctly?

What about the two SANs not seeing each other?  If I can disconnect a host and reconnect it to the new vCenter, how would this work?  I'm thinking the SANs would need to see one another to allow vMotion between them.

I might be overthinking this, but I just want to be sure I'm understanding you correctly.  Again, thanks for all your help.  As for the other things pointed out in your blog, I understand, but we have no worries on that end at this time.  Thanks.

daphnissov
Immortal

Your comment "going from ESXi hosts with FCP-attached storage to VxRail (vSAN)" is a little unclear.  I'm guessing FCP-attached storage means Fibre Channel, correct?

Yes, FCP = Fibre Channel Protocol

so you're saying that as long as the Layer 3 network connectivity is configured correctly, I won't need anything else to connect them to the new vCenter?

I take it that even if the IPs on these hosts are different, as long as the network is configured to allow them to communicate with each other, I can do this.  Did I understand you correctly?

Yes, that's correct. Let me illustrate.

I'm going to move a VM from one FCP-attached datastore, across vCenters and clusters, to a totally different array that is also connected via FCP.

pastedImage_4.png

The hosts in each cluster have separate IP schemes on separate subnets, but they have L3 connectivity between them.

I initiate a migration, selecting the option to move storage first, from the source side to the destination.

I check the network utilization of the source host's vmnic that backs the management vmkernel port, and I can clearly see that ESXi is moving data over that IP network and not over the FCP network.

pastedImage_5.png

pastedImage_8.png

If I check the destination host that was selected as the target, I see the inverse happening as it receives the data stream, also over the vmnic assigned to management.

pastedImage_7.png

pastedImage_9.png

Further, if I check the events log for the destination host, I can see the migration is occurring in NFC (Network File Copy) mode while writing blocks out to the backend datastore.

pastedImage_10.png

The operation completes and the VM has been moved to the destination vCenter, cluster, and datastore.

So as long as you have L3 connectivity between management vmkernel ports from source to destination host, the migrations should succeed. Be aware (although it should be obvious enough) that the storage and memory contents (if performed online) will traverse your production network.

EDIT:  I should state that the vMotion vmkernel ports for source and destination hosts are in the same broadcast domain, so memory contents traverse that network, but storage data still travels across L3 boundaries over management.
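
If you want to map the same thing out on your own hosts, these two commands on each side list the vmkernel ports, the TCP/IP stack each belongs to, and their IP addresses, so you can see exactly which subnets need to reach each other (a rough sketch; vmk numbering will differ from host to host):

esxcli network ip interface list

esxcli network ip interface ipv4 get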

AngelC2
Enthusiast

Ok, sounds good.

Some things to note, in case they make a difference: on my existing vCenter, looking at the virtual switches under Networking, I see that our Management Network and vMotion are both on the same subnet, and no VLAN ID tags are being used.  The new vCenter on the VxRail does use VLAN IDs, and it uses a Distributed Switch instead of a standard switch like my existing one.

On our physical network switches we've configured routes, etc. for the new VxRail VLANs and have VLAN tags configured on them.  Our existing data network also has a VLAN tag on those switches, but not within our existing vCenter.  Will any of this make a difference?

I'm hoping we can just configure our physical network to allow Layer 3 communication between the two vCenters and not have to worry about much else.  What are your thoughts?

daphnissov
Immortal

On my existing vCenter, looking at the virtual switches under Networking, I see that our Management Network and vMotion are both on the same subnet, and no VLAN ID tags are being used.  The new vCenter on the VxRail does use VLAN IDs, and it uses a Distributed Switch instead of a standard switch like my existing one.

On our physical network switches we've configured routes, etc. for the new VxRail VLANs and have VLAN tags configured on them.  Our existing data network also has a VLAN tag on those switches, but not within our existing vCenter.  Will any of this make a difference?

Shouldn't matter, although it's definitely not a good thing that you have management and vMotion interfaces on the same L2 segment. As long as you can vmkping from the vMotion vmkernel port of the source host to the same port on the destination, you should be good. ESXi doesn't care about VLAN IDs matching or about like-for-like switch configurations; it only cares about connectivity. So as long as you have connectivity from source to destination (both management and vMotion), you are fine.
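
For example, from an SSH session on the source host, something along these lines (assuming vmk1 is its vMotion vmkernel port; substitute your interface number and the destination host's vMotion address, and, if I recall correctly, you can add -S vmotion when the port lives on the dedicated vMotion TCP/IP stack):

vmkping -I vmk1 <destination-host-vMotion-IP>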

I'm hoping we can just configure our physical network to allow Layer 3 communication between the two vCenters and not have to worry about much else.  What are your thoughts?

When you say "between the two vCenters," note that there's no need to make adjustments to the networking of vCenter Server itself, only to that of the hosts participating in the migrations.

The other thing to be aware of, if you hope to migrate these VMs online, is CPU compatibility. If you aren't using EVC, you'll need to ensure the VMs running on the source side are able to move to the destination while powered on; otherwise you'll have to migrate them cold.
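
If you want a rough way to compare processor generations before you test, something like this on one source host and one destination host should print the CPU model each reports (just a quick check; the real answer comes from an actual test migration):

vim-cmd hostsvc/hostsummary | grep cpuModel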

AngelC2
Enthusiast

OK, got it: connectivity between the hosts is the key.  As for my comment about the two vCenters, that's what I meant, really the hosts within them rather than the networking configuration of vCenter itself; sorry for the confusion.

I'm glad you brought up EVC, because that was my next question.  Based on past experience, at one point I had a mix of HP servers for hosts and I did end up using EVC in my cluster.  I then upgraded my host servers and since they were all identical, I didn't have to set the EVC mode for the cluster.  So, my current vCenter does not have EVC mode configured for the cluster.

The new vCenter on the VxRail does have an EVC mode configured, although I'm not sure why, since all 3 of its nodes are identical as well.  My goal is to do online vMotion/Storage vMotion of the VMs over to the VxRail, so I figured I would have to do something about EVC.

I'm planning to use a test VM migration to figure this out before moving production VMs.

daphnissov
Immortal

It's not a given that you must use EVC on the source side. If the source CPUs are old enough (or of a similar generation to the destination's), the VMs may be migrated without any additional configuration. As you said, you really need to do a test before determining what, if anything, you have to change.

AngelC2
Enthusiast

OK, thanks for all of your help!  🙂

AngelC2
Enthusiast

In regards to my testing, here is what happened:

I was able to connect my existing host to the VxRail vCenter, but due to EVC mode I had to add it to the datacenter as a standalone host rather than to the cluster.  When I attempted my first migration, it failed because vMotion was not communicating properly.

I used PuTTY to open SSH sessions to both hosts and confirmed this:

My VxRail host was able to ping both the management IP and the vMotion IP of the standalone host, but my standalone host can only ping the management IP of the VxRail host I'm trying to migrate to, not its vMotion IP.  Any ideas on what I would need to do?
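
For reference, this is roughly what I ran in those SSH sessions (the addresses here are placeholders; I'm going from memory):

From the VxRail host:

ping <standalone-host-management-IP>    <- replies
ping <standalone-host-vMotion-IP>       <- replies

From the standalone host:

ping <VxRail-host-management-IP>        <- replies
ping <VxRail-host-vMotion-IP>           <- no reply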

I've uploaded the image of the error that I'm getting for reference.

pastedImage_5.png

Thanks,

Angel

daphnissov
Immortal

On your source host you may need to configure the vMotion TCP/IP stack and reassign your vMotion vmkernel port to it. Optionally, depending on your network design, you may need to add static routes that allow vMotion traffic from the source to reach the destination.
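
As a sketch of what the static route piece could look like from the source host's command line (the subnet and gateway are placeholders for your environment, and the --netstack option only applies if the vmkernel port sits on the vMotion TCP/IP stack; leave it off for the default stack):

esxcli network ip route ipv4 add --network <destination-vMotion-subnet>/24 --gateway <local-gateway-IP> --netstack=vmotion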

AngelC2
Enthusiast

OK, I sort of understand, but when you say to configure the vMotion TCP/IP stack and reassign my vmkernel port there, what exactly does that mean?

I'm not too experienced on the networking side of things, so do you mean I would need to add the vMotion port/IP from the destination host?

AngelC2
Enthusiast

Thanks.  Although the document is pretty straightforward, it's still not working.  When I use the option of selecting an existing network, the adapter takes on the same gateway, DNS, etc. as the current TCP/IP stack for that switch, so it still doesn't communicate with the vMotion interface of the destination host, which is behind a different gateway.

When I tried creating a new switch, I didn't have any available NICs, so I can't configure it that way.  I do have other NICs on the source host, but they don't show up as available adapters because they're already configured for the other switches.

I believe I'm closer but still missing something.  Do I have to figure out how to remove one of these NICs from its current configuration so it can be used for a new switch?  I'm thinking that's the only way to give the vMotion VMkernel adapter TCP/IP settings that match the destination host.  I'm not sure, but that's what I'm thinking.
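
From what I've been reading, the command-line equivalent of what I'm trying to do would be roughly along these lines (just my notes, not something I've run yet; vmk2, the port group name, and the addresses are only placeholders):

esxcli network ip interface add --interface-name=vmk2 --portgroup-name=<vMotion-portgroup> --netstack=vmotion

esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=<vMotion-IP> --netmask=255.255.255.0 --type=static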

AngelC2
Enthusiast

Update:

I managed to figure out how to remove a NIC so that it could be used by a new switch, but when I tried that, it still did not let me set the gateway and DNS settings, so this isn't working for me.
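
For reference, listing the routes each TCP/IP stack currently has seems to be possible only from the command line (my understanding from the docs is that -N selects the vMotion stack):

esxcli network ip route ipv4 list

esxcli network ip route ipv4 list -N vmotion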

Again, the documentation may be correct, but either I'm missing something that isn't obvious to me, I'm not understanding it fully, or it just doesn't work for my scenario.  Could you give me more step-by-step instructions?  Here are some screenshots to show you what my environment looks like:

This is Virtual Switches on Source Host:

VirtualSwitchesOnSourceHost.png

This is VMkernel Adapters on Source Host:

VMkernelAdaptersOnSourceHost.png

daphnissov
Immortal

I need to better understand your network topology and how you've configured source and destination ESXi hosts. Can you show those VxRail hosts, the switches, and vmkernel ports? Can you give me a layout of the networks involved here?

AngelC2
Enthusiast

Yes, I'll show you the VxRail destination host, since I've already shown you the source host.  Remember, this is an out-of-the-box VxRail installation done by Dell EMC, so the way the switches, etc. are configured is their doing, not mine.  As for the layout of the networks involved: what you're seeing in these screenshots is that our existing VMware environment is on our Data VLAN, which uses 10 as the third octet.

The new VxRail uses multiple VLANs, with vSAN and vMotion being the two new ones we created on top of what already existed in our network.  vSAN uses 50 and vMotion uses 60 as the third octet.  The gateway for the source host is 10.10.10.1, and for the VxRail it's 10.10.1.1.  The Management VLAN is 1 and the Data VLAN is 10; the Data VLAN is where the VMs live.

VxRailDistributedSwitch.png

VxRailNetworks1.png

VxRailNetworks2.png

VxRailAdapters.png

VxRailTCPIPStack.png

VxRailTCPIPStack2.png

VxRailTCPIPStack3.png

That's it, hope this helps.  🙂

daphnissov
Immortal

Do this from each ESXi host (source and destination) and paste the output:

From source:

traceroute -i vmk1 10.10.60.12

From destination:

traceroute -i vmk4 10.10.8.21
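
If those traceroutes complete, the next check would be a vmkping between the vMotion interfaces using the same addresses (I'm assuming vmk1 and vmk4 are the vMotion vmkernel ports on each side, as above; add -S vmotion if the port is on the dedicated vMotion TCP/IP stack):

From source:

vmkping -I vmk1 10.10.60.12

From destination:

vmkping -I vmk4 10.10.8.21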
