Enthusiast

Slow performance with thin disk copying in vSphere 4

I have this odd problem that I can easily replicate and I wonder if anyone else has this issue.

Scenario 1:

Clone a powered-off VM with a 20GB thin-provisioned disk (7.28GB in use) to the same datastore it resides on, destination format same as source.

Result:

Write speed: 30MBps as measured using Perfmon connected to the HP EVA

Scenario 2:

Clone a powered-off VM with a 20GB thick-provisioned disk to the same datastore it resides on, destination format same as source.

Result:

Write speed: 170MBps as measured using Perfmon connected to the HP EVA

I have the same performance problem when cloning a machine that has a snapshot. What I don't understand is that, according to this document: Performance Study of VMware vStorage Thin Provisioning, there should be no performance impact when reading a thin-provisioned disk, and writing a new disk should perform similarly whether it is thin provisioned or not. The only penalty should come from writing to a thin disk, which is not happening in this case.

Has anyone else noticed this issue? Is cloning a thin disk just as fast as a thick one in your environment? It is clearly not a LUN issue; I have tried multiple LUNs and all exhibit the same behavior. I also don't think it is an issue with the EVA, as it clearly has no problem pushing 150MBps+ every time I have copied a thick disk.
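As a rough sanity check on the numbers above, here is a back-of-the-envelope calculation (a sketch only, assuming the array-side Perfmon rates apply to the whole copy and that the thin clone only moves the 7.28GB of allocated blocks):

```python
# Estimate clone duration from the two scenarios reported above.
# Assumption: the measured write rate is sustained for the full copy.

def copy_seconds(gigabytes, mb_per_s):
    """Time in seconds to move `gigabytes` of data at `mb_per_s` MB/s."""
    return gigabytes * 1024 / mb_per_s

thin_s = copy_seconds(7.28, 30)    # scenario 1: thin clone, 7.28GB at 30MBps
thick_s = copy_seconds(20.0, 170)  # scenario 2: thick clone, 20GB at 170MBps

print(f"thin:  {thin_s:.0f} s (~{thin_s / 60:.1f} min)")   # ~248 s (~4.1 min)
print(f"thick: {thick_s:.0f} s (~{thick_s / 60:.1f} min)")  # ~120 s (~2.0 min)
```

So the thin clone moves roughly a third of the data yet takes about twice as long, which is the anomaly.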

I am running BL460c G1 blades connected to an HP EVA. ESXi 4.0.0, build 219382.

Thanks!

2 Replies
Virtuoso

This seems pretty much like the issue in http://communities.vmware.com/message/1514339, which I am experiencing on EVA4/8000 too.

For the moment, it seems I can only hope for the upcoming U2 to improve this.

-- http://alpacapowered.wordpress.com
Enthusiast

After talking with VMware support, it looks like this is an issue with the datamover.

1) Test thick to thick on the same VMFS volume and record the time to complete:

time vmkfstools -i /vmfs/volumes/thick type source/source.vmdk /vmfs/volumes/thick type dest/destination.vmdk

real 0m 41.85s

user 0m 5.72s

sys 0m 0.00s

2) Test thick to thin on the same VMFS volume and record the time to complete:

time vmkfstools -i -d thin /vmfs/volumes/thick type source/source.vmdk /vmfs/volumes/thin type dest/destination.vmdk

real 6m 8.24s

user 0m 11.31s

sys 0m 0.00s

3) Disable the datamover: esxcfg-advcfg -s 0 /VMFS3/CloneUsingDM

4) Test thick to thick on the same VMFS volume again (datamover disabled) and record the time to complete:

time vmkfstools -i /vmfs/volumes/thick type source/source.vmdk /vmfs/volumes/thick type dest/destination.vmdk

real 5m 51.12s

user 0m 25.74s

sys 0m 0.00s

5) Test thick to thin on the same VMFS volume (datamover disabled) and record the time to complete:

time vmkfstools -i -d thin /vmfs/volumes/thick type source/source.vmdk /vmfs/volumes/thin type dest/destination.vmdk

real 5m 38.00s

user 0m 25.58s

sys 0m 0.00s

As you can see from the results, with the datamover disabled the speed is equivalent for thick-to-thick and thick-to-thin. With it enabled, thick-to-thick is very fast (about 41 seconds) versus roughly 6 minutes for thick-to-thin.
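To put the timings side by side, here is a small helper (hypothetical, not part of the test above; it just parses the BusyBox-style `time` output shown in the results) that converts the readings to seconds and computes the slowdown:

```python
import re

def clock_seconds(s):
    """Parse a `time` reading like '6m 8.24s' or '0m 41.85s' into seconds."""
    m = re.match(r"(?:(\d+)m\s*)?([\d.]+)s", s.strip())
    return int(m.group(1) or 0) * 60 + float(m.group(2))

dm_thick = clock_seconds("0m 41.85s")  # datamover on, thick -> thick
dm_thin = clock_seconds("6m 8.24s")    # datamover on, thick -> thin

print(f"thick->thin is {dm_thin / dm_thick:.1f}x slower")  # 8.8x slower
```

With the datamover disabled, the three remaining runs (5m 51s, 5m 38s) land in the same ballpark as the thin case, which matches the conclusion above.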

I also have an EVA4400. I am going to try this on my NetApp device and see if I experience the same problem.
