The time has come to take my atrociously slow RAID 5 array and turn it into a RAID 10, change the controller options, etc. I'm currently running Windows Server 2003 R2 x64 as a host with four guest OSes (SBS 2003, accounting, Terminal Services, and antivirus). I'm also planning to virtualize a machine running on old hardware that is still overpowered for the job it does (concrete quality control and BES Express with 3 users).
My main concern is startup/shutdown time. I can get all the machines shut down in 5 minutes, but it's nerve-wracking with the UPS alarms going off while you hope your Exchange database doesn't bite the dust. Startup takes a good 10-15 minutes. Users start trying to sign in long before everything is back up and torture me on the phone about how the power has been on for 10 minutes.
I'm hoping that since ESXi is so light it will boot up and shut down much faster, and that without the overhead of Windows Server 2003 x64 as a host it will perform better.
I'd like to run a test on the VMs to see how long it takes to convert them from their Acronis TI backups, and for the SBS server, from a shutdown state. (The backup is over 300 GB and I'd prefer to be certain that Exchange is completely shut down and all that good stuff.) When I go to run the converter it tells me I need an ESXi machine to select as a target. I'd prefer to do this long in advance and offline, since I only have one server and can't migrate until the last minute.
I'm not really sure how you can convert the VM to a format that is compatible with ESXi and test it on ESXi without an ESXi host (if I'm understanding your question correctly).
In my experience the speed at which a VM boots is most heavily dependent on the storage underlying it. On fast storage it tends to boot faster and can tolerate more VMs booting at once. The server's CPU and memory also come into play, but from what I've seen over the years it almost all comes down to storage.
The Windows host OS is likely contributing to the slower bootup, but it is probably not a major factor. I suspect your underlying storage is causing the delay.
Can you find another system with similar hardware and a similar RAID configuration that you can use to test the boot time? If so that would be my suggestion.
I was trying to gauge how long the switchover would take so I could plan for the service interruption. I might be able to test on a fast PC; that's as close as I have to this server, though it certainly won't be on the approved hardware list.
The storage is the main contributor to the poor performance. I'm running an HP Smart Array E200i RAID card with battery-backed write cache, in RAID 5, and the E200i is notorious for running very slowly in RAID 5. Switching to RAID 10 (2 sets of 3 disks and a hot spare) will give me an overall improvement in speed simply from the striping, while the mirroring and hot spare maintain redundancy and quick recovery. There are also a few caching settings, stripe sizes, etc. that need to be changed, and I fear that the firmware and config changes alone will likely kill the RAID 5 anyhow. Back in the day I thought RAID 5 was the cat's a$$ and didn't know anything about 1+0.
The Windows-based host is not the whole problem, just another part of it. Factor in that I'm also running VMware Server 2.
It's tough to estimate how long the switchover will take without the target system available, for obvious reasons. You can probably convert the whole VM to a disk file rather than going directly to an ESXi host, but that may not give you a representative sense of how long it will take.
You should also factor in that if you're running applications like Exchange/SQL (which you may well be on SBS), you'll want to shut down those services during the conversion to make sure the data remains intact and up to date. Admittedly I haven't done a P2V on an Exchange server in a very long time (it's easier to simply rebuild and migrate), but that was the guidance in the past. So consider that when planning this as well.
I wish I had a better answer for you. The best advice I can give you is to try to find other hardware that closely matches what your target system will look like and test the conversion that way. If that isn't possible, then estimate on the high side and assume it will take several hours if you have hundreds of gigabytes to convert.
Wish I could give you more precise guidance.
If you run the VMs with one-piece preallocated vmdks, they are already ESXi compatible. No need to spoil them with Converter!
A small edit with a text editor does the job as well and doesn't mess with the MAC addresses and so on - read my notes.
If you use a format other than monolithicFlat, use vmware-vdiskmanager.exe to convert the vmdks.
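The text-editor trick for a monolithicFlat descriptor can be sketched like this. The file names are made-up examples, the sed patterns assume the usual descriptor layout, and the sample descriptor is only there to demonstrate on - check the result against your own descriptor (and keep a backup) before trusting it:

```shell
# 1) Sample descriptor for demonstration (a real one sits next to
#    the matching -flat.vmdk file).
cat > /tmp/demo.vmdk <<'EOF'
# Disk DescriptorFile
version=1
createType="monolithicFlat"
RW 8388608 FLAT "demo-flat.vmdk" 0
EOF

# 2) The two edits ESXi cares about: the createType value and the
#    extent type on the RW line. (Some descriptors also carry a
#    trailing offset on the extent line; whether it needs removing
#    may depend on your ESXi version.)
sed -i -e 's/createType="monolithicFlat"/createType="vmfs"/' \
       -e 's/^\(RW [0-9][0-9]* \)FLAT /\1VMFS /' /tmp/demo.vmdk

# 3) Show the edited descriptor.
cat /tmp/demo.vmdk
```

Since this only rewrites a small text file, it leaves the flat extent, the MAC addresses, and the rest of the VM's identity untouched, which is the whole point over running Converter.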
This might be the answer! I have an older machine currently running two VMs in another part of our site. I could grab the images off that machine and test. I'll get back to you on how it went. The only issue I might have is that I don't allocate the entire disk at creation time, so I'm not sure whether that will be a problem.
Only one-piece preallocated vmdks are ESXi compatible.
These one-piece preallocated vmdks come as "monolithicFlat" or "vmfs".
If you use "monolithicSparse" you have to run vmware-vdiskmanager first.
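For the monolithicSparse case, the conversion is a single command. The file names below are examples, and the `-t` value is my assumption from the Workstation/Server builds I've seen, so confirm it against your own build's help output:

```shell
# -r names the source disk to convert; the second path is the output.
# On the builds I know of, -t 4 is the preallocated ESX-type disk and
# -t 2 is a plain preallocated single file -- run "vmware-vdiskmanager -h"
# to see the type numbers your version supports.
vmware-vdiskmanager -r sparse-disk.vmdk -t 4 esxi-ready.vmdk
```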
I picked up some used Dell 2950s, which have a better RAID controller than my current server, so they're going to become the new host. I've run through everything but the PDC because I haven't had a big enough time window, and it looks as if I'm going to run Converter hot. I actually tried running it on a powered-off machine and it hung at 1%. Running Converter hot converted about 30 GB in about an hour using ESXi running on my desktop PC inside VMware Player, so I'm hoping to see better times off the full-blown server.