VMware Cloud Community
ShahidSheikh
Enthusiast

Moving VMs from one ESXi server to another - what a pain!!!

This is where I miss my setup with VMware Server. I do not have iSCSI or NFS central storage. Before I started the move to ESXi, all my VMware Server (1.0.7) machines were Ubuntu 7.04 servers, each of which exported its VM directories through NFS and cross-mounted the others'. So on each server I had the directory structure:

/vmfs/svm01

/vmfs/svm02

/vmfs/svm03

/vmfs/svm04

Only one directory was local; the remaining three were NFS mounts from the three other servers. This made moving VMs around a breeze. I could even run vmware-vdiskmanager with the input and output VMs on different servers. The only thing that was slow was provisioning thick vmdks.
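For reference, that cross-mount setup looks roughly like this (the hostnames svm01..svm04 and paths are from the post; the export and mount options are assumptions, not something the post specifies):

```shell
# /etc/exports on svm01 (each server exports its own VM directory):
#   /vmfs/svm01  svm02(rw,no_root_squash) svm03(rw,no_root_squash) svm04(rw,no_root_squash)

# /etc/fstab entries on svm01 mounting the other three servers:
#   svm02:/vmfs/svm02  /vmfs/svm02  nfs  rw,hard,intr  0 0
#   svm03:/vmfs/svm03  /vmfs/svm03  nfs  rw,hard,intr  0 0
#   svm04:/vmfs/svm04  /vmfs/svm04  nfs  rw,hard,intr  0 0
```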

Now under ESXi, moving VMs is a pain. Apparently scp in ESXi does not support the -C (compression) flag. Using scp, I can either copy thick vmdks from one ESXi host to another, or import them into thin vmdks and then scp those. Either way it's a painfully slow process, and scp only gives a maximum throughput of about 2.5 MBytes/sec (20 Mbps) on my gigabit-connected NICs (the VM NIC is separate from the management NIC). All servers are either Dell (2850, 2950) or HP 2U (380 G4) servers, all with 4 or more U320 SCSI drives in a hardware RAID.

I was hoping that with the RCLI I would be able to run the vmkfstools.pl script across machines, but alas, it too only runs vmkfstools operations on a single machine.

Right now I am trying to move one of my VMs that has a 120 GB vmdk, and it says it's going to take 15 hours. I can (I think) reduce the server downtime significantly by taking a snapshot and moving that first, then shutting down the VM and moving the deltas, but all of this for something I could do fairly painlessly in VMware Server seems like a lot of work.
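As a sanity check on that estimate, a quick back-of-the-envelope in shell (figures taken from the post: 120 GB disk, ~2.5 MB/s observed scp throughput):

```shell
# 120 GB at ~2.5 MB/s; values scaled by 10 to keep the arithmetic in integers
size_mb=$((120 * 1024))              # disk size in MB
rate_tenths=25                       # 2.5 MB/s expressed in tenths of MB/s
secs=$((size_mb * 10 / rate_tenths)) # transfer time in seconds
echo "$((secs / 3600)) hours"        # about 13 hours, in line with the quoted 15
```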

My question is: does anyone have a better way of moving VMs from one ESXi host to another? I will eventually have a centralized NFS store, but that is not going to happen for another 2 months.

It would be very cool if there was a way to NFS export the local datastores in ESXi.

81 Replies
RParker
Immortal

This works pretty well, but other than browsing the datastore it's pretty much the same as what you are doing. The file copy takes a while any way you look at it, but this Veeam tool may help you out a little: http://www.veeam.com/download.asp?step=2&license_type=7

ShahidSheikh
Enthusiast

When you say "This works pretty well.." what are you referring to? SCPing the vmdks around?

Didn't think FastSCP worked with ESXi unless you had VC.

alex555550
Enthusiast

Hi,

Have you looked at VMware Converter?

s1xth
VMware Employee

Yes... it can be slow. I got 10 Mb/s the last time I moved a VM via scp. VMware Converter is faster as it (almost) maxes out the line. If you are moving small VMs, scp'ing is great, but large ones are not fun. I am in the same position as you; thankfully I don't move my VMs around that much.

The good news is that Veeam is currently working on a new version of FastSCP that will be fully compatible with ESXi, and you won't need to turn SSH on. It is currently in beta, and they hope to have the product ready for January. I know this doesn't help you RIGHT NOW, and I wish there were a faster way, but hopefully in the future someone (or VMware) will do something for us ESXi customers. It would be nice of them.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
RParker
Immortal

When you say "This works pretty well.." what are you referring to? SCPing the vmdks around?

Didn't think FastSCP worked with ESXi unless you had VC.

You mean SC, and that's right, I forgot you were using ESX3i... I wish these vendors would come out with compatible tools for ESX3i. It would make things much better.

sant0sk1
Contributor

I am right there with you on this one, RParker. Backing up my ESXi guests from one ESXi server to another is taking 10+ hours using scp, as it maxes out at ~8 MB/s. Any advice on a better process or tool would be much appreciated.

The FastSCP suggested above seems to be for ESX. Can anybody confirm that it works on ESXi?

kpc
Contributor

Hi Sheikh

Not sure why you're only getting 2.5 MB/s. I have noticed that you get that speed when writing to an NFS share, but not when doing a normal SCP from a local datastore to the new ESXi host. I do an export locally to another datastore, then SCP the files to the other ESXi server; with just an SCP I get proper speeds, around 20MB/s. How long does it take you just to do an export of your 120GB?

ShahidSheikh
Enthusiast

Hi,

Have you looked at VMware Converter?

Yeah, I use it frequently. But I haven't figured out a way to get the non-licensed version of Converter to upload to an ESXi host. Didn't think it could be done.

I basically use Converter to convert a physical machine into 2 GB sparse vmdk files in the VMware Server 1.0.x format, then copy the vmdks up to the ESXi server and use vmkfstools to import them.
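The import step mentioned above looks roughly like this on the ESXi console (the datastore and file names here are hypothetical; only the `vmkfstools -i` clone form is taken from the thread):

```shell
# Run on the destination ESXi host's (unsupported) console:
# clone/import a sparse vmdk into the local VMFS datastore.
#   vmkfstools -i /vmfs/volumes/nfs-import/myvm/myvm.vmdk \
#       /vmfs/volumes/datastore1/myvm/myvm.vmdk
```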

ShahidSheikh
Enthusiast

Hi Sheikh

Not sure why you're only getting 2.5 MB/s. I have noticed that you get that speed when writing to an NFS share, but not when doing a normal SCP from a local datastore to the new ESXi host. I do an export locally to another datastore, then SCP the files to the other ESXi server; with just an SCP I get proper speeds, around 20MB/s. How long does it take you just to do an export of your 120GB?

You're getting 20 MBytes/sec? I.e. 160 Mbps?

Why are you exporting to another datastore locally first? To convert it into thin?

sant0sk1
Contributor

Here's a crazy idea.

What if we just replaced ESXi's built-in scp with an scp binary that supports compression and isn't crippled in any fashion? Would it be able to run on ESXi's Linux-based console OS, or does it need to be compiled for that specific kernel?

ShahidSheikh
Enthusiast

Oh! And I forgot to mention that using the download option (haven't tried upload yet) in the Datastore Browser in the VI client does get me much better speed. It maxes out at about 25 MBytes/sec (200 Mbps) and sustains about 20 MBytes/sec. Still, for a 120 GB VM that is too much time, and if I first convert it to thin I spend that much more time in the conversion process.

I wish there were a less cumbersome way of moving vmdks from one local datastore to another. It would have been nice to pipe the thin output of vmkfstools into scp (or, better yet, netcat), then pipe it back into vmkfstools on the destination and have it converted back into thick.

Something like that, in conjunction with snapshots, would enable moving vmdks in the least amount of time while requiring little extra disk space.
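That kind of stream-and-convert pipeline can be mocked up locally. Here is a minimal sketch using gzip as the compression stage; the netcat hop and the vmkfstools ends are left out, and the temp files simply stand in for disk data:

```shell
# Stand-in for "thin stream | compress | network | decompress":
src=$(mktemp)
dst=$(mktemp)
head -c 1000000 /dev/zero > "$src"    # fake disk contents (compresses well)
gzip -c "$src" | gunzip -c > "$dst"   # in a real move, netcat would sit between these two
cmp -s "$src" "$dst" && echo "round trip OK"
```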

ShahidSheikh
Enthusiast

Here's a crazy idea.

What if we just replaced ESXi's built-in scp with an scp binary that supports compression and isn't crippled in any fashion? Would it be able to run on ESXi's Linux-based console OS, or does it need to be compiled for that specific kernel?

It needs to be compiled for the specific kernel. Plus, I am not sure that it's scp that is limited in any way; I believe it's the amount of resources the vmkernel allows the console to have that limits scp's performance. If you had a way of copying that didn't involve encryption, I'm sure it would be faster.

And if that were possible, I would run an NFS server on ESXi so I could simply export the datastores themselves.

The function I would really love to see in future versions of ESXi is the ability to export/mount local datastores/VMFS file systems to other ESX/ESXi servers. If corruption is a concern, some sort of safety mechanism could be put in place so that the exported file systems can be read by any utility but only written to by vmkfstools.

glim
Contributor

Would anyone be interested in an rsync binary that can be used on esx3i?

It may not be faster, but it is restartable.

kpc
Contributor

I export to a local datastore purely to speed up the whole process, and it's only temporary. Originally I mounted an NFS share using the VI client and exported to that... jeez, was it slow! I tried loads of different settings in the NFS config but none increased the speed. And yes, I meant 20 Mbps, bits not bytes. :) This is how I do my backups: export locally, then SCP pulls the files off the server to an NFS mount on the machine running the backup script. However, 120 GB is a large VM; thankfully I only have one like it running on ESXi.

ShahidSheikh
Enthusiast

Would anyone be interested in an rsync binary that can be used on esx3i?

It may not be faster, but it is restartable.

Yeah, I would like to try it out. Which binary are you using? I.e., from which distribution?

I was thinking of trying out binaries from BusyBox to see if I can get any of them to work in the console.

ShahidSheikh
Enthusiast

I export to a local datastore purely to speed up the whole process, and it's only temporary. Originally I mounted an NFS share using the VI client and exported to that... jeez, was it slow! I tried loads of different settings in the NFS config but none increased the speed. And yes, I meant 20 Mbps, bits not bytes. :) This is how I do my backups: export locally, then SCP pulls the files off the server to an NFS mount on the machine running the backup script. However, 120 GB is a large VM; thankfully I only have one like it running on ESXi.

Very interesting. So it may be writing to the mounted NFS volume that was slow. That is exactly how I imported my VMs from my Ubuntu boxes: I mounted stores from the Ubuntu/VMware Server 1.0.7 boxes over NFS on my ESXi hosts and ran vmkfstools -i to import the vmdks into the local store. That import was fairly fast; I didn't think to look at the speed at the time. My source vmdks were thin.

glim
Contributor

Ok. I'll get it and post it for you.

It's not from a specific distro; it was compiled on Slackware 11.0, though I doubt the specific version matters.

It is a completely standard rsync-3.0.3 built from source.

The only things you need to do to build it yourself, if you don't want to trust a binary from some random person on the net, are:

1. Build statically. Your libraries are not going to be around on the target ESX box.

2. Build without TLS support. The mgmt kernel doesn't do TLS, so your binary cannot either.

3. Optional: strip the binary when done.
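Assuming the same rsync 3.0.3 source tarball, those build steps might look like this (the exact configure invocation and flags are guesses on my part; adjust for your toolchain):

```shell
# Static build sketch for rsync 3.0.3 (flags are assumptions, not from the post):
#   tar xzf rsync-3.0.3.tar.gz && cd rsync-3.0.3
#   CFLAGS="-static" ./configure   # step 1: static link, since shared libs won't exist on the ESXi console
#   make
#   strip rsync                    # step 3: optional, shrinks the binary
```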

To use this:

1. All of this is unsupported. Take responsibility for your own actions.

2. Enable the unsupported sshd on your ESXi system.

3. Place this 'rsync-static-stripped' binary into /bin or somewhere else in your path on your ESXi system.

4. You will probably want to rename the binary to just "rsync" so that you don't need to specify the full name on the command line.

Note that you should probably not try to rsync any files that are currently in use.

There is no warranty and no guarantees of any kind.

ShahidSheikh
Enthusiast

Got it. Many thanks. Will try compiling it myself too.

kpc
Contributor

Interesting, glim. Have you tried building NFS the same way as rsync? That might get past the slow export to the NFS share.
