When virtualising a server with one or more physical disks, is there a best practice for the other disks? Should they be included in the P2V, or separately migrated from the OS?
Many Thanks
Matt
Matt,
There should be no issues when converting multiple disks with VMware Converter, as the functionality is included.
It is up to you if you want to migrate the OS drive, then migrate the secondary drive afterwards.
Nice one, cheers. Are there any benefits to doing it one way or the other? I was advised to migrate the drives separately, but without a good explanation!
As long as Converter does its job, and the Active/OS drive is correct and was formatted by Windows ... then it would be WAY easier to migrate with Converter. It's the big "Easy Button".
Doing it separately is more like a secondary plan for troubleshooting.
Best Practices for Troubleshooting VMware Converter - http://kb.vmware.com/kb/1004588
Sound like a plan?
Doing large data disks with Converter can be quite slow, so using Ghost, Acronis, robocopy or whatever may be a good plan B.
Yeah, I think it's about what's on the disk. I'm virtualising Oracle servers atm, and as much as I love Robocopy, I'm not sure I'd use it in that instance. 2 x 150GB dynamic disks (with Oracle DBs on) are taking somewhere close to 14 hours atm. Annoying, cos I gotta get up and get the server online by 9am and there's still 7.5hrs to go!
If you can bring down the server for the duration of the P2V process, use the Cold Cloning method, in which you boot from the VMware Converter bootable CD (assuming you have a VMware Converter Enterprise license). It will use a block-level copy to the destination. Another way to improve the process is to use a Gigabit network with 2 NICs teamed end to end from the source machine to the destination machine. The benefit is that the copy process is done outside the Windows OS, which has a 36MB/s limit on the file copy process, and I don't think robocopy will bypass that limit. My benchmark was cold cloning 70GB of data on a single 100Mbps link on both ends, which took about 3 hours. So theoretically, if you have full duplex 2 x 1Gbps NICs teamed, you should be able to cold clone 300GB in less than 3 hours.
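To sanity-check those numbers, here's a back-of-the-envelope estimate of clone time from link speed. The ~55% efficiency factor is my own illustrative assumption (protocol and disk overhead), picked because it roughly matches the 70GB-in-3-hours benchmark above; it's not a VMware figure.

```python
# Rough P2V transfer-time estimate: sustained throughput is taken as
# link speed times an assumed efficiency factor (illustrative, not a spec).

def clone_hours(data_gb, link_mbps, efficiency=0.55):
    """Estimated hours to move data_gb gigabytes over a link_mbps link."""
    throughput_mb_s = link_mbps / 8 * efficiency  # megabytes per second
    seconds = data_gb * 1024 / throughput_mb_s
    return seconds / 3600

# 70 GB over a single 100 Mbps link: close to the ~3 hour benchmark
print(round(clone_hours(70, 100), 1))

# 300 GB over 2 x 1 Gbps teamed NICs: comfortably under 3 hours
print(round(clone_hours(300, 2000), 1))
```

Real-world results depend heavily on source disk speed and how full the disks are, so treat this as a ballpark only.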
It's better to clone all disks at once, unless you want to move the data to a different datastore. If you have enough space in your environment, you could move the data at a later stage if you want to.
If you must use hot cloning, it could be a lot slower, but with the network configuration suggested above it should not take more than 6 hours.
Good luck and let me know how it goes.
Matt,
do you have 2 threads for the same topic going? I think I replied to both!
Hope all went well.
Yeah, sort of, they were slightly different, but I coulda just made do with one!
First Oracle migration last night; don't know what all the fuss was about. Shut down all the Oracle services so no one could connect to it, and away it goes. Unfortunately it was hot cloned, so it took forever (14hrs), but it worked fine.
I was going to do a cold clone today, but I get a "Can't detect guest operating system" error when I try. I think it's because of the Dell system partition at disk 0, but I'm constrained by time so I've got to hot clone it.
Converted multiple disks across 3 datastores and all went fine.
Thanks for your help everyone
Cold clones not detecting the guest OS is mostly because of a lack of drivers.
Some newer Dell systems (PowerEdge 1900/1950/2900/2950) use SAS controllers via PERC 5e/i or 6e/i, and you need to provide the drivers to the boot CD.
I've had to do that a number of times; peTool can integrate the drivers directly into the ISO, or you can press F6 to add them via TXTSETUP.OEM.
> Cold clones not detecting the guest OS is mostly because of a lack of drivers
or because there are entries in boot.ini for more than one OS.
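For reference, a generic example (not taken from the poster's machine) of a boot.ini with more than one OS entry, which can confuse the cold clone CD's guest OS detection. On Dell boxes the utility partition typically sits at partition 1, pushing Windows to partition 2:

```ini
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(2)\WINDOWS

[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Windows Server 2003" /fastdetect
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Server" /fastdetect
```

If the second entry points at an OS that no longer exists, removing the stale line (and keeping `default=` pointing at the real install) is usually safe, but back up boot.ini first.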
Ah, ok, that's probably it then. I got the PE tool with the ISO, but I've not got a clue how to use it! Do I need to add the drivers into the ISO?
Brilliant! Thank you very much, that worked a treat!