VMware Cloud Community
ccvme
Enthusiast

Migrating to new hardware without shared storage

Been reading a lot about this and trying to get my plan together. Read an older post which I think answers my question but wanted to get some confirmation.

Old environment: 4 ESXi hosts running ESXi 6.5u3, vCenter vCSA 6.7u3n. All hosts connected to shared HP fiber MSA. Old hardware isn't supported on newer vSphere.

New: 3 hosts will be running ESXi 7.x with vCSA 7.x, using totally separate storage (HP MSA iSCSI) but on the same physical network as the existing hosts.

Both sets of hosts are running Intel CPUs, but obviously very different generations of Xeon processors.

Would I add the new hosts into a data center/cluster on the existing vCSA (once it's updated to 7.x), or would I add an older host to the data center/cluster of the new environment? Do I even need to do any of this, or can I somehow just migrate these VMs cold across the network to the other data center/cluster?

What would be the most efficient way to do this? VM downtime isn't that big of a deal, but it would need to be done over the weekend / after hours. I can shut down one of the old hosts if needed as well; we have plenty of resources to run the VMs with one host shut down. Could I do replication of some kind? I'm looking for any options that don't involve backup and restore.

 

 

19 Replies
fabio1975
Commander

Ciao 

I would add the new nodes to the current vCenter (once updated to version 7.x) and then perform a cold migration.

Maybe this link can help you http://blametheitguy.com/vmware-esxi-cluster-6-5-6-7-to-7-0-migration-and-new-hardware/

Otherwise, if you have Veeam Backup & Replication as your backup product, I would use Veeam replication to reduce downtime.

Fabio

Visit vmvirtual.blog
If you're satisfied give me a kudos

ccvme
Enthusiast

Veeam Replication may be what we need. We could buy a one-year license or even use the trial to test out the solution.

So if I add the new nodes to the existing vCenter, at some point I'm going to need to move them to a new vCSA. Is it just a matter of disconnecting a host from one vCSA and attaching it to another, or do I need to move all the VMs off, put the host into maintenance mode, and then move the host to the new vCSA?

 

fabio1975
Commander

Ciao 

If you use Veeam, I recommend you keep two distinct infrastructures: the old infrastructure with the 6.x nodes and their vCenter, and the new infrastructure with the new 7.x nodes and their vCenter. You can then replicate the VMs from the old vCenter to the new one.

Check the Veeam compatibility matrices.

Fabio

Visit vmvirtual.blog
If you're satisfied give me a kudos

niyijr
Enthusiast

Hi @ccvme

Can you connect the HP fiber MSA to the new hosts?

If yes, you can:

  1. Mount the old datastore(s) to the new hosts
  2. Power down the VMs during your maintenance window
  3. Remove the VMs from the inventory on the old vCenter
  4. Register the VMs on the new cluster (a host-shell sketch of steps 2-4 follows below)
  5. Then do a Storage vMotion from the old datastore to the new one

The Storage vMotion will take some time depending on the size of the VMs, but the VMs can be powered back on first and will stay running during this phase. They may see some performance degradation depending on your network, so try to do one VM at a time.
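
If you prefer to do steps 2-4 from the ESXi host shell instead of the vSphere Client, here is a minimal sketch (assuming SSH is enabled on both hosts; "MyVM", the VM ID 42 and the datastore path are placeholders for your environment):

  # On the OLD host: find the VM's ID, power it off and remove it from inventory
  vim-cmd vmsvc/getallvms | grep MyVM   # note the Vmid in the first column
  vim-cmd vmsvc/power.off 42            # 42 = the Vmid found above
  vim-cmd vmsvc/unregister 42

  # On a NEW host that can see the old datastore: register the VM from its .vmx file
  vim-cmd solo/registervm /vmfs/volumes/old-datastore/MyVM/MyVM.vmx

When you first power the VM on from its new host you may be asked whether it was moved or copied; answering "I moved it" keeps the existing UUID and MAC addresses.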

________________________________________________________
Please KUDO helpful posts and mark the thread as solved if answered
a_p_
Leadership

Upgrading from v6.7 U3n to 7.0 is currently not supported (see https://kb.vmware.com/s/article/67077)

André

e_espinel
Virtuoso

Hello.
According to the interoperability matrix, a vCenter Server 7 can manage ESXi hosts at version 6.5 U3.

[Screenshot: VMware Interoperability Matrix showing vCenter Server 7.0 managing ESXi 6.5 U3 hosts]

 

Link: https://interopmatrix.vmware.com/#/Interoperability?isHideGenSupported=true&isHideTechSupported=true...

If we do not need the statistics/history from the old vCenter Server, we could try to migrate as follows:

The ESXi hosts should be moved without a cluster configuration; at the end of the migration you can reconfigure a cluster on the new ESXi hosts if needed.

  1. Disconnect the first old ESXi host from the vCenter Server at version 6.7; the VMs will continue to run on that host.
  2. Connect that old ESXi host to the new vCenter Server at version 7.
  3. Verify that all network settings (internal and external) on the old ESXi host match the network configuration on the new ESXi 7 hosts; this is needed to be able to migrate the VMs.
  4. Verify that vMotion is working and uses the same network segment on both the old and new ESXi hosts (see the quick check below).
  5. The new ESXi hosts must have datastores configured on the new storage.
  6. Because the hardware (CPU and so on) is different, the VMs should be cold migrated (powered off).
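
As a quick host-shell check for the vMotion/network steps above, this sketch assumes vmk1 is the vMotion VMkernel interface and 192.168.10.22 is the other host's vMotion IP (both are placeholders):

  # List the VMkernel interfaces and their IP addresses on this host
  esxcfg-vmknic -l

  # Show which services (Management, VMotion, ...) are tagged on the interface
  esxcli network ip interface tag get -i vmk1

  # Check that the other host's vMotion address is reachable from that interface
  vmkping -I vmk1 192.168.10.22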

An important detail is to verify that both the vSphere 6 and vSphere 7 licenses include the vMotion feature.

If resources allow, test first with an ESXi host that contains non-critical VMs, and if everything goes smoothly, continue with the rest of the ESXi hosts.

 

Enrique Espinel
Senior Technical Support on IBM, Lenovo, Veeam Backup and VMware vSphere.
VSP-SV, VTSP-SV, VTSP-HCI, VTSP
Please mark my comment as Correct Answer or assign Kudos if my answer was helpful to you, Thank you.
IRIX201110141
Champion

You should use one of your 3 new hosts as a transfer host. Whether you need EVC or not depends on a lot of things... it's often possible to vMotion VMs from older hosts to newer ones without the help of EVC, but since Spectre and Meltdown the game has changed a lot.

  1. Old Cluster (EVC)
    1. Host Old_A
    2. Host Old_B
    3. Host Old_C
  2. Transfer Cluster (EVC mode matching the Old Cluster)
    1. Host New_A
  3. New Cluster (EVC Latest)
    1. Host New_B
    2. Host New_C

You can now use svMotion to move your VMs from the old cluster/SAN to the transfer cluster/new SAN with no downtime during the long period of moving the VMs. Once the Storage vMotion has finished, a short VM shutdown and migration to the new cluster is needed to get rid of the old EVC settings.

 

Regards,
Joerg

continuum
Immortal

Here is a brute-force approach for this task that is not very convenient, but very, very reliable ....

You need one Linux VM with ssh access to the source ESXi and to the target ESXi.
Inside the Linux VM you mount the source datastore via sshfs in read-only mode.
Next you mount the target datastore in writeable mode.
Then you copy over the flat.vmdks, delta.vmdks or sesparse.vmdks with ddrescue.

All other files can be copied with simple cp commands.
This option is not the fastest one, as all copies go through sshfs twice, but the approach is restartable in case the network dies along the way, and in my experience this is the only approach where I would bet on 100% success.
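
As a rough sketch of what this looks like from the Linux VM (host names, datastore names and the VM folder are placeholders; it assumes sshfs and ddrescue are installed and SSH access to both ESXi hosts works):

  mkdir -p /mnt/source /mnt/target

  # Mount the source datastore read-only and the target datastore writable
  sshfs -o ro root@old-esxi:/vmfs/volumes/old-datastore /mnt/source
  sshfs root@new-esxi:/vmfs/volumes/new-datastore /mnt/target

  # Copy the large disk extents with ddrescue; the map file makes the copy restartable
  mkdir -p /mnt/target/MyVM
  ddrescue /mnt/source/MyVM/MyVM-flat.vmdk /mnt/target/MyVM/MyVM-flat.vmdk MyVM-flat.map

  # Copy the small descriptor and config files with plain cp
  cp /mnt/source/MyVM/MyVM.vmdk /mnt/source/MyVM/MyVM.vmx /mnt/target/MyVM/

  # Unmount when finished
  fusermount -u /mnt/source
  fusermount -u /mnt/target

The copied VM still needs to be registered on the target host before it can be powered on.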

 

Ulli


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

niyijr
Enthusiast

@continuum Interesting approach 👍

________________________________________________________
Please KUDO helpful posts and mark the thread as solved if answered
continuum
Immortal

For a one-off task this procedure is probably way too complicated and inconvenient.
To see the advantages you probably need to practice it a couple of times.

In my daily work (recovery of VMs and VMDKs from damaged VMFS volumes) I have been using this approach for about 10 years.
I prepared a special Linux LiveCD for this job so setting up the Linux system is done in minutes.
I especially love the predictability and reliability.

Once I have booted from my LiveCD, I run 4 commands. If they succeed, I can promise my customer that the migration will work, 100%.

I can also be sure that the VM on the source datastore will never be messed up.
If the network between source and target host dies I can always resume a copy in progress.
Even vmdks with I/O errors on the source can be migrated ....
Best of all are the minimal requirements: the ESXi version can be anything between ESXi 4 and ESXi 7; all you need is SSH access.

And just as a side note: I always enjoy answering questions like "what is the most reliable and predictable way to do a typical vSphere admin task" with the claim: first of all, don't use the ESXi built-in tools.
Of course this procedure is completely unsupported 😎

Ulli


________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...

ccvme
Enthusiast

What I ended up doing is joining a test host (outside of my normal production cluster) to a new cluster inside my existing vCenter, then just did a vMotion from there. It's not blazing fast (1Gb network) and doesn't leave a copy on the old hardware, which I would prefer, but it does get the job done. Since I can power these off during migration, I may just clone them to the new hardware so I have a copy on the old side just in case.

ccvme
Enthusiast

You should use one of your 3 new hosts as a transfer host. Whether you need EVC or not depends on a lot of things... it's often possible to vMotion VMs from older hosts to newer ones without the help of EVC, but since Spectre and Meltdown the game has changed a lot.

 

I need to read more about EVC because I really don't know much about it.

I think I did something similar to what you were describing, but without EVC; I'm pretty sure it wasn't enabled. Assuming we're doing all of this cold (VMs are off), does EVC matter?

 

ccvme
Enthusiast

Can you connect the HP fiber MSA to the new hosts?

 

Unfortunately, different storage and different storage types: going from an old fibre array to a new, higher-speed iSCSI array. No sharing of storage, because the new hosts don't have a way to connect to the old storage. I'm not sure whether I could somehow connect the existing hosts via an iSCSI adapter to the new storage, and whether that would actually be any different than just doing the vMotion. It seems like it would be the same as far as speed goes.

IRIX201110141
Champion

You can always do a cold migration when the VM is powered off. Then EVC doesn't matter.

EVC is a way to bring all CPUs down to the same feature level so that a vMotion is possible.

Regards,
Joerg

niyijr
Enthusiast

Great, glad it's sorted.
________________________________________________________
Please KUDO helpful posts and mark the thread as solved if answered
a_p_
Leadership

There's no need for shared storage. You can live migrate VMs between hosts even without shared storage (option: "Change both compute resource and storage"). This feature has been available since vSphere 5.5 U2 for Essentials Plus or higher licensed environments. However, it requires that the hosts are managed by the same vCenter Server (please see my previous reply).

André

Bruticusmaximus
Enthusiast

How many VMs do you need to move? A down-and-dirty way to do it is to use the VMware Converter (P2V) tool. The one benefit of this is that you still have the original VM, powered off, in the old environment if something goes drastically wrong. If you need to move 100 VMs, this might not be the best solution.

ccvme
Enthusiast

Once we have the new hardware, I'll be adding one of the old hosts to the new vCenter and doing the migration by cloning the servers while they are offline, so I'll have copies on the old hardware just in case something goes wrong. I've done a few tests and it's pretty fast, even on a 1Gb network.

ccvme
Enthusiast

How many VMs do you need to move?

 

About 50. I'm just going to use the clone option to move between storage/clusters. I've tested it and it works fine. I'll need to make sure my networking matches the old switches, but it shouldn't be hard to resolve any of those issues.
