VMware Cloud Community
jasonah123
Contributor

vSphere + HP EVA 8100 + cloning thin vmdks = very slow!

I'm wondering if anyone else has seen this problem. Essentially, any time I clone a vmdk and convert it to thin, or clone a thin vmdk and leave it thin, I get terrible performance. For example, I've been testing with a 10 GB thick vmdk. Cloning it thick takes about 20 seconds; cloning it thin takes about 2 minutes 30 seconds. Esxtop shows extremely high latency coming from the array while I'm cloning to thin vmdks. I've mostly been using vmkfstools for the testing, but it doesn't matter how I do it or which datastore I hit.
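For reference, here's a minimal sketch of the kind of test I've been running; the datastore and file names below are placeholders, not my actual paths:

```sh
# Clone keeping the thick format (~20s for my 10 GB disk)
time vmkfstools -i "/vmfs/volumes/datastore1/src/source.vmdk" \
  "/vmfs/volumes/datastore1/dst/clone-thick.vmdk"

# Same clone converted to thin (~2m30s for the same disk)
time vmkfstools -i -d thin "/vmfs/volumes/datastore1/src/source.vmdk" \
  "/vmfs/volumes/datastore1/dst/clone-thin.vmdk"
```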

Has anyone else seen something similar? I have cases open with both VMware and HP but no resolution yet. To me it looks like an HP issue, since that's where the latency shows up, but they aren't convinced.

Jason Horn

http://virtuallygone.wordpress.com

9 Replies
golddiggie
Champion

What's the VM's performance with either thick or thin provisioned VMDK files?

I was running everything thin provisioned in my ESXi lab, but then heard from someone else that they leave VM boot drives thick and make additional volumes thin. This is in a company setting with very fast storage. I gave it a try in my lab and did notice improved performance inside the VMs after converting the boot drives to thick provisioned. I'm also seeing that a deploy-from-template task is much faster when the template uses a thick provisioned drive than a thin provisioned one. Thin provisioned templates take anywhere from 3x to 5x longer to deploy than thick provisioned templates. This is on a local datastore/LUN (two 7200 rpm SAS drives on a PERC 6i controller in a RAID 1 configuration).

This comes back to what the instructor said about thin/thick provisioning vmdk files with ESX/ESXi 4: use whichever makes the most sense for your environment and your hardware. There are no official parameters for when either is advised. Basically, use thin provisioning where you can do so without a performance hit; otherwise use thick provisioning, unless you need the space savings thin provisioning gives you. Balancing those two can be a bit tricky. I've moved to a model of thick C: drives and thin additional drives within VMs. I might thick provision the SQL 2005 server's secondary drive too, since that's where SQL is installed and the databases are kept. It just means you need to keep a close eye on your storage (if you weren't already doing so).

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

jasonah123
Contributor

The VM's performance is just fine once it's created. The issue only appears during creation operations such as cloning.

Jason Horn

http://virtuallygone.wordpress.com

MKguy
Virtuoso

I can confirm the same issue on HP EVA 8000 and EVA 4000 (and local SAS storage of the host).

I have exactly the same issue when cloning or Storage vMotioning thin provisioned VMDKs. Thick VMDKs migrate blazingly fast, while thin VMDKs are extremely slow.

What’s also weird is that esxtop shows very high storage device latencies during thin VMDK operations, even though neither throughput nor IOPS are anywhere near excessive.
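For context, the latency I'm referring to is the DAVG/cmd value in esxtop's disk device view (press u in interactive mode). To capture the counters for a support case, esxtop's batch mode can be used, something like:

```sh
# Record 10 esxtop samples at the default interval for offline analysis
esxtop -b -n 10 > esxtop-thin-clone.csv
```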

-- http://alpacapowered.wordpress.com
jasonah123
Contributor

In my environment I actually have two separate issues. There is a VMware side issue that does make thin provisioning slightly slower that will be fixed in an upcoming update. However, even with that fix, I still see issues against my EVA. With the fix in place provisioning thin or thick to local storage is fairly equivalent.

Cloning a 10 GB disk:

Thick to EVA: 20.56s

Thin to EVA: 4m 22.55s

Jason Horn

http://virtuallygone.wordpress.com

JasonBurrell
Enthusiast

After talking with VMware support, it looks like this is an issue with the data mover.

1.) Test thick to thick on the same VMFS volume and record the time to complete:

time vmkfstools -i "/vmfs/volumes/thick type source/source.vmdk" "/vmfs/volumes/thick type dest/destination.vmdk"

real 0m 41.85s

user 0m 5.72s

sys 0m 0.00s

2.) test thick to thin on the same VMFS volume and record the time to complete:

time vmkfstools -i -d thin "/vmfs/volumes/thick type source/source.vmdk" "/vmfs/volumes/thin type dest/destination.vmdk"

real 6m 8.24s

user 0m 11.31s

sys 0m 0.00s

3.) Disabled the data mover (esxcfg-advcfg -s 0 /VMFS3/CloneUsingDM)

4.) test thick to thick on the same VMFS volume and record the time to complete:

time vmkfstools -i "/vmfs/volumes/thick type source/source.vmdk" "/vmfs/volumes/thick type dest/destination.vmdk"

real 5m 51.12s

user 0m 25.74s

sys 0m 0.00s

5.) test thick to thin on the same VMFS volume and record the time to complete:

time vmkfstools -i -d thin "/vmfs/volumes/thick type source/source.vmdk" "/vmfs/volumes/thin type dest/destination.vmdk"

real 5m 38.00s

user 0m 25.58s

sys 0m 0.00s

As the results show, with the data mover disabled the speed is equivalent for thick-to-thick and thick-to-thin. With it enabled, thick-to-thick is really fast (41 seconds) versus about 6 minutes for thick-to-thin.
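For anyone repeating this test, the setting should be reversible the same way it was changed (note -s 1 instead of -s 0; -g reads the current value):

```sh
# Re-enable the VMFS data mover after testing (reverses step 3)
esxcfg-advcfg -s 1 /VMFS3/CloneUsingDM
# Verify the current value
esxcfg-advcfg -g /VMFS3/CloneUsingDM
```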

I also have an EVA 4400. I'm going to try this on my NetApp device and see if I experience the same problem.

MKguy
Virtuoso

Just wondering, does anyone have an update on this?

-- http://alpacapowered.wordpress.com
a_p_
Leadership

Take a look at http://kb.vmware.com/kb/1023768

Cloning or migrating a virtual machine with thin as the destination file format is significantly slower than cloning virtual machines with the destination file as thick. This issue occurs due to the large number of operations involved in the cloning of a thin virtual disk.

Installing this patch reduces the operations involved in cloning targets with thin file formats and results in a faster thin to thin disk cloning.
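To check whether the patch is already on a host, something along these lines should work (command names assume classic ESX 4.x; for ESXi, the vCLI's vihostupdate with --query is the rough equivalent):

```sh
# Show the ESX version and build number (the KB lists the fixed build)
vmware -v
# List installed patch bulletins on classic ESX
esxupdate query
```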

André

MKguy
Virtuoso

Well, this patch was released for 4.0 only. I'm on 4.1 now and still experiencing this issue. Not sure if this was supposed to be fixed with 4.1 or if there is still a 4.1 patch to come.

-- http://alpacapowered.wordpress.com
a_p_
Leadership

Since this patch was released just a few days ago (Sept. 30th), I'm quite sure this issue will be fixed in the first/next patch for ESX(i) 4.1 as well.

André
