VMware Cloud Community
Stefano67
Contributor

VM migration from 3.5 cluster to 4.1

Hi all,

I need to move all my VMs from my old ESX 3.5 cluster to the new one, made up of ESXi 4.1 hosts.

What are the possible solutions? Which is the fastest, and which is the most recommended?

Currently the two clusters do not share any datastore; would setting one up be the best approach?

Thanks for any hint.

  Stefano

11 Replies
schepp
Leadership

Hi,

If they had a shared datastore, you would be able to migrate the VMs with vMotion, without any downtime or impact on the users.

Without a shared datastore, you'll need to power down the VMs and migrate them with vCenter, VMware Converter, or scp.
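For the scp route, a minimal sketch of the steps (hostnames, datastore names, and VM paths below are placeholders, not from this thread; note that scp will fully inflate thin-provisioned disks, so Converter may be a better fit for large thin VMDKs):

```shell
# Hedged sketch: hostnames, datastore and VM names are placeholders.
# 1. Power off and unregister the VM on the ESX 3.5 host first (via the VI Client).
# 2. Copy the VM's directory from the old host to the new one over ssh:
scp -r /vmfs/volumes/old-datastore/myvm/ \
    root@esxi41-host:/vmfs/volumes/new-datastore/myvm/
# 3. On the ESXi 4.1 host, register the copied VM in the inventory:
vim-cmd solo/registervm /vmfs/volumes/new-datastore/myvm/myvm.vmx
```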

Regards

Stefano67
Contributor

Hi,

Actually the two clusters are managed by two different VirtualCenter instances. Is it possible/good practice to share a datastore between two VCs (of course not for running VMs, only for moving them)?

Currently I'm trying the "export appliance / deploy OVF template" route, but it's terribly slow and sometimes I get errors.

idle-jam
Immortal

What's the bandwidth between the two sites? If it's a local LAN, deploying the OVF should not fail.

mittim12
Immortal

I don't think it's a big deal to share a datastore for this purpose. The biggest concern is having the LUN attached to so many hosts at a given time. If you simply use the datastore for the transfer of the machines, then you should be fine.

Stefano67
Contributor

> what's the bandwidth between both site if it's local LAN deploying OVF should not fail ..

At least 100 Mb/s, in some cases 1 Gb/s.

> If you simply used the datastore for the transfer of the machines  then you should be fine.

Ok, I'll try in this way.

Thanks,

  Stefano

Stefano67
Contributor

Quite strange behaviour: if I deploy the OVF template from a network-mapped drive (Windows), it completes successfully. But if the OVF template is on the local machine (where the vSphere Client is running), the process consumes all the memory (up to 6 GB), the NICs are disabled for a few seconds, all network connections are lost, and the import fails saying that vCenter is not responding (obviously, as the connection is lost). :smileyshocked:
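When the vSphere Client's built-in deployment misbehaves like this, the standalone OVF Tool CLI may be worth trying as a workaround, since it runs outside the client process. A sketch, where the paths, credentials, and vi:// inventory target are illustrative assumptions:

```shell
# Hedged sketch: deploy an OVF to a cluster via VMware OVF Tool instead of
# the vSphere Client. Datastore, network, path, and vi:// target are assumed.
ovftool --datastore=target-datastore \
        --network="VM Network" \
        /path/to/exported/myvm.ovf \
        "vi://administrator@vcenter41.example.local/DC1/host/Cluster41/"
```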

taylorb
Hot Shot

Can you connect the 3.5 hosts and their storage to the new 4.1 hosts and vCenter? If so, this is a walk in the park: just vMotion the VMs to the new hosts and you are done. If you have new storage you want to use, you can Storage vMotion them to the new disks after that. You can select multiple VMs at a time, and with five minutes of work they'll copy overnight without any downtime.

OVF migration is always slow and resource intensive because it copies, compresses, and encapsulates many GB of data.

Stefano67
Contributor

Unfortunately not: the ESX 3.5 hosts are connected to FC storage, while the ESXi hosts use iSCSI (they have no FC HBAs).

What I'm trying to do instead is connect the ESX 3.5 hosts to the iSCSI storage and share at least one LUN, move the VMs there, and then add them to the 4.1 cluster inventory.

It's a bit tricky because the storage is on a VLAN. I'll keep you up to date. Smiley Happy

Stefano67
Contributor

Well, it works !

I configured the storage VLAN on both switches and created a dedicated vmkernel port also on ESX 3.5 hosts, so they can access the iSCSI storage via software initiator.
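On classic ESX 3.5 this setup can also be scripted from the service console. A sketch with assumed names (the vSwitch, port group name, VLAN ID, IP addressing, and adapter name are illustrative, not the poster's actual values):

```shell
# Hedged sketch of ESX 3.5 service-console commands; vSwitch, port group,
# VLAN ID, and addresses are assumptions for illustration.
esxcfg-vswitch -A iSCSI vSwitch1            # add an "iSCSI" port group
esxcfg-vswitch -v 20 -p iSCSI vSwitch1      # tag it with the storage VLAN
esxcfg-vmknic -a -i 10.0.20.35 -n 255.255.255.0 iSCSI  # create the vmkernel port
esxcfg-swiscsi -e                           # enable the software iSCSI initiator
esxcfg-rescan vmhba32                       # rescan (adapter name may differ per host)
```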

I moved some test VMs onto a datastore created on an iSCSI LUN mapped to the ESX 3.5 cluster, removing them from the ESX 3.5 inventory.

Then, from the storage management software, I removed the LUN's mapping to the ESX 3.5 group and added a mapping to the ESXi 4.1 group; the datastore appeared after a simple rescan.
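The rescan on the 4.1 side can also be driven remotely with the vSphere CLI instead of clicking through the client. A sketch, where the host name and the software iSCSI adapter name are assumptions (the adapter name varies per host):

```shell
# Hedged sketch: trigger a storage rescan on an ESXi 4.1 host via the
# remote vSphere CLI. Host and adapter names are placeholders.
vicfg-rescan --server esxi41-01.example.local --username root vmhba33
```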

I added the VMs to the ESXi 4.1 inventory and then moved them from the LUN used for the transfer to the other static LUNs dedicated to the ESXi 4.1 hosts.
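Adding the VMs to the inventory can also be done from each host's Tech Support Mode shell rather than one-by-one in the client. A sketch, where "transfer" is an assumed datastore name:

```shell
# Hedged sketch: register every VM found on the transfer datastore.
# "transfer" is an assumed datastore name, not from this thread.
for vmx in /vmfs/volumes/transfer/*/*.vmx; do
    vim-cmd solo/registervm "$vmx"
done
```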

The only annoying part is this mapping/unmapping, because if I added all hosts to the same group they would share *all* datastores, and I don't like that.

Bye,

   Stefano

taylorb
Hot Shot

Stefano67 wrote:

because if I added all hosts in the same group then they would share *all* datastores, and I don't like it.

Whether or not you like it, it is the right way to do it, and the software was designed that way. Give all hosts access to all datastores under the same vCenter, and then you can vMotion and Storage vMotion everything with zero downtime. Your way actually seems more risky.

mittim12
Immortal

I don't think you should give all hosts access to all the LUNs. We limit our clusters to just the LUNs that house the VMs they are running. We have one LUN that we present to all hosts, which we call a transfer volume. If we need to move anything between clusters, we simply make use of the transfer volume. This is of course a temporary spot used to facilitate the move. Once the VM is moved to the new cluster, we Storage vMotion it back over to a permanent LUN.
