VMware Cloud Community
Lysinger
Contributor

Upgraded from ESXi 3.5 to 4.0, and the VMFS 4 to 7 conversion is taking a really, really long time

I'm a networking guy by trade, so please go easy on me.

Problem:

Moving VMs from one disk to my new RAID array storage is taking 10 hours per VM. Three of them are thin provisioned and would be 250GB each if they were thick provisioned. Once I copied a few over and filled my new RAID array, I learned that I still need to convert the VMs from VMFS 4 to 7 or they won't fire up. The standalone converter has been saying it will take 1 minute to convert, for hours and hours.

I tried to convert one VM on the RAID array and stopped it after 12 hours. I am now converting one that lives on the original SATA drive, with the RAID array as the destination; it has been going for 36 hours now and still says one minute to convert.

Is this normal?

If it is, I can live with it, but is there a way to get any sort of status on the conversion? In the datastore browser the 250GB file was created pretty quickly, but the conversion still says it's running.

Will 4.1 be any faster and would it be better to upgrade to it before converting my next 250GB VM?

All help is greatly appreciated.

Scenario:

ESXi 3.5 white box.

Core 2 Duo E8400, 4GB RAM, one SATA hard disk.

Slow once 2 VMs are running, and slow as dirt once 3 or more VMs are fired up, because of the single disk drive. Otherwise it ran great.

Bought an LSI 8308 RAID card and 3 new 500GB drives. Configured a virtual disk with RAID 5 default settings.

Installed ESXi 4 on the RAID array.

Learned how to use the datastore browser and add the VMs.

Found out I have to convert from VMFS 4 to 7. Also found out that thin provisioned disks copy or convert to thick when moving between datastores with different block sizes. No biggie, I can buy 2 more drives and get more disk performance to boot. (Though from what I've read, it may be possible to thin them back out after the copy; see the sketch below.)
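From what I've read (untested on my box, so treat it as a sketch), vmkfstools on the ESXi console can clone a thick disk back into thin format after the copy. The datastore and file names below are just examples:

    # clone the thick disk into a new thin-provisioned disk on the RAID datastore
    vmkfstools -i /vmfs/volumes/raid5/myvm/myvm.vmdk -d thin /vmfs/volumes/raid5/myvm/myvm-thin.vmdk
    # then point the VM's .vmx file at myvm-thin.vmdk and delete the old disk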

0 Kudos
6 Replies
athlon_crazy
Virtuoso

I'm a networking guy by trade, so please go easy on me.

Problem:

Moving VMs from one disk to my new RAID array storage is taking 10 hours per VM. Three of them are thin provisioned and would be 250GB each if they were thick provisioned. Once I copied a few over and filled my new RAID array, I learned that I still need to convert the VMs from VMFS 4 to 7 or they won't fire up. The standalone converter has been saying it will take 1 minute to convert, for hours and hours.

We normally refer to this as a "virtual machine" hardware upgrade from version 4 to version 7, not a VMFS upgrade. For VI3 ESX we use VMFS 3.33, but I'm not sure which revision vSphere 4.1 uses (3.46?).
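If you prefer the ESXi console over the GUI for the hardware upgrade, something like the following should work. This is a sketch: the VM ID and path are examples, the VM must be powered off, and I believe the version string is vmx-07, but run vim-cmd vmsvc/upgrade with no arguments to check the usage:

    # list registered VMs with their IDs
    vim-cmd vmsvc/getallvms
    # check the current hardware version in the .vmx file (path is an example)
    grep virtualHW.version /vmfs/volumes/raid5/myvm/myvm.vmx
    # upgrade the powered-off VM with ID 16 to hardware version 7
    vim-cmd vmsvc/upgrade 16 vmx-07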

I tried to convert one VM on the RAID array and stopped it after 12 hours. I am now converting one that lives on the original SATA drive, with the RAID array as the destination; it has been going for 36 hours now and still says one minute to convert.

Is this normal?

What tool are you using for the conversion? Is it Converter Standalone? For a local migration, regardless of SATA or RAID, 36 hours is way too long.

http://www.no-x.org
0 Kudos
Jackobli
Virtuoso

This RAID controller is already listed as "obsolete" on LSI's website. Did you buy it new or used?

Is this controller equipped with a MegaRAID® LSIiBBU01 battery backup unit?

If not, there is most probably no write cache activated, and ESXi has to wait for every write command to complete.
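If you can run LSI's MegaCli against the controller (or look at the same settings in the card's WebBIOS at boot), something like this should show and change the cache policy. Treat it as a sketch; the exact flags vary between MegaCli versions:

    # show the current cache policy of all logical drives
    MegaCli -LDGetProp -Cache -LAll -aAll
    # show whether a BBU is present and charged
    MegaCli -AdpBbuCmd -GetBbuStatus -aAll
    # switch logical drives to write-back (only safe with a working BBU or a UPS)
    MegaCli -LDSetProp WB -LAll -aAll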

As athlon_crazy writes, the time for the conversion seems way too long. But Converter and other tools reserve the full file size up front, so the disk file appears to be created very quickly; it is actually a sparse file that gets filled with data later.
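One rough way to watch the progress, assuming you have shell access on the host (the path is an example): compare the provisioned size of the destination disk with the blocks actually allocated so far:

    # provisioned (nominal) size of the destination disk
    ls -lh /vmfs/volumes/raid5/myvm/myvm-flat.vmdk
    # blocks actually written so far; this should keep growing during the conversion
    du -h /vmfs/volumes/raid5/myvm/myvm-flat.vmdk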

If your VMs depend on fast disk, you should consider RAID 10, and perhaps more drives or different drives (10K SAS instead of SATA).

Lysinger
Contributor

I am using Converter Standalone 4.0.1, build 161434.

0 Kudos
Lysinger
Contributor

I bought the controller new from a liquidator. LSI's latest model with 8 SATA ports was $700 more than I wanted to spend. I'm on a pretty strict budget.

The controller has a socket for the battery backup, but I have not purchased one for it.

I appreciate the info on the write cache and battery backup. About how much will this speed up the controller's performance?

SAS 10K drives are noisier and pricier than I can afford. Adding 2 more SATA drives is in the budget though.

0 Kudos
Lysinger
Contributor

As an update, the conversion did complete in about 40 hours according to the converter. The VM fired up just fine. I guess I need a couple more drives or some wet paint to watch dry.

Thanks!

0 Kudos
Jackobli
Virtuoso

The controller has a socket for the battery backup, but I have not purchased one for it.

It should be around $130.

I appreciate the info on the write cache and battery backup. About how much will this speed up the controller's performance?

Depends on the write behaviour of the guests.

There were some threads about HP Smart Array P400/P410 controllers that gained quite a lot (from 4-8 MB/sec to 20 MB/sec).

SAS 10K drives are noisier and pricier than I can afford. Adding 2 more SATA drives is in the budget though.

I see (and know). More spindles are usually better.

0 Kudos