Erik67
Contributor

Best practice moving drives from one ESXi server to another

I had a customer that needed a new server fast, so I ended up virtualising the old server (the drives were still working); it is now running under ESXi on an S3210SHLC-based server. However, this is my server and I want it back :-)

I have now assembled a new server with the same hardware (different housing), but I plan to run the controller in AHCI mode instead of IDE (which does not recognise all of the SATA ports). Both servers boot from USB sticks.

The old server (mine) uses two independent 500 GB SATA drives, and the server VM has virtual drives on both physical drives (some of which are software-mirrored by the W2K3 guest OS for added security). This works fine.

However, if I simply move the drives to the new server, will they show up as the same datastores, and will the VM still find all of its virtual drives so the mirrors do not break?

What do you recommend? Moving the drives only, or moving the drives and the USB stick (and later modifying it to use AHCI)?

I realize that the correct way is to use VMware Converter, but I hope to do this quickly, without hours of waiting for the migration to finish.

The implications are also important for disaster recovery, since it is reassuring if a drive with a VM can easily be moved from one server to another.

Erik

4 Replies
Erik67
Contributor

Since posting the above question, I have experimented some more.

I can switch back and forth between AHCI and IDE (in the BIOS) on the server without any problems, as long as both drivers are enabled (AHCI has to be enabled manually in simple.map).

However, it is not possible to shut down the server, physically move a drive to another server and mount the VMFS partition there. If I try to add it in the VI client, it asks to format the drive, even though it can see that the drive already contains a VMFS filesystem. This should be simple, since we are talking about single drives that verifiably work in both computers.

The problem is that if I have a drive full of VMs on a VMFS filesystem, I can't use that data on another server. That could cause a lot of sleepless nights, since it would mean your data is inaccessible in the event of a server failure.

There must be a way to mount the drive in the service console (BusyBox) without erasing it.

Erik

khughes
Virtuoso

I'm guessing that by independent drives you mean RDMs? Is it not possible for you to connect to the ESXi box with WinSCP (or similar) while the VMs are powered down, copy down the folders with the vmx/vmdk files, upload them to the new server, and add them to the new ESXi box's inventory?

  • Kyle

Dave_Mishchenko
Immortal

Instead of trying to add storage, go to Configuration / Storage Adapters and click Rescan. That may bring up the datastore. If it doesn't, run the command tail -f /var/log/messages at the console, run the rescan again, and see whether any errors about snapshots are generated.
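For reference, the same rescan can be triggered from the (unsupported) ESXi console shell while watching the log. This is only a sketch: the adapter name vmhba0 is an example and will differ per host.

```shell
# In one console session, follow the VMkernel messages:
tail -f /var/log/messages

# In another session, rescan the storage adapter.
# vmhba0 is an example name - check yours under
# Configuration / Storage Adapters in the VI client.
esxcfg-rescan vmhba0
```

Any snapshot/resignature complaints about the moved VMFS volume should then show up in the log output.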

Erik67
Contributor

After a lot of time searching, I finally figured it out.

Before the drive will show up in a rescan, I have to go to Configuration / Advanced Settings and set LVM.EnableResignature to 1.

Then I can do a rescan and the drive shows up. Now I just have to rename the new datastore to something practical, and I can browse it and add the VMs to the inventory.
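For anyone who prefers the console, the same steps can be sketched from the ESXi shell. Again a hedged sketch, not a tested procedure: vmhba0 is a placeholder adapter name, and these commands assume ESX/ESXi 3.x where the LVM.EnableResignature option exists.

```shell
# Allow VMFS volumes that look like copies/snapshots to be resignatured
# (the command-line equivalent of LVM.EnableResignature in Advanced Settings)
esxcfg-advcfg -s 1 /LVM/EnableResignature

# Rescan so the moved drive's VMFS volume is detected and resignatured
esxcfg-rescan vmhba0

# Turn resignaturing back off afterwards so future rescans
# do not resignature other volumes unexpectedly
esxcfg-advcfg -s 0 /LVM/EnableResignature
```

The resignatured datastore typically appears under a generated "snap-..." name, which is why renaming it to something practical afterwards is a good idea.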

There were no snapshots involved.

Erik
