VMware Cloud Community
WillL
Enthusiast

Storage vMotion: write twice data at destination than read at source

During a Storage vMotion, esxtop shows MBWRTN/s at twice MBREAD/s under the disk device view.

datastore1: local, block size was 1MB, reformatted to 8MB

datastore2: iscsi lun, block size 8MB, target is windows 2008 r2, initiator is dependent iscsi hba (broadcom 5709)

It happens either way. I used Task Manager on the iSCSI target side to monitor the total transfer size:

if vm size is 10GB

iscsi lun -> local lun, iscsi target sent 10GB

local lun -> iscsi lun, iscsi target received 20GB

Also tried "cp" to copy a file between the two datastores; same as above, it writes twice the size it reads.

any thoughts? thanks.
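For reference, esxtop batch mode can capture these counters for offline analysis. The sketch below computes the write:read ratio from a captured sample; the CSV here is made-up illustration data (real esxtop batch output has many more columns), but the same awk one-liner works once you pick out the MBREAD/s and MBWRTN/s columns:

```shell
# On the host, capture counters in batch mode during the svMotion, e.g.:
#   esxtop -b -d 5 -n 12 > svmotion.csv
# Synthetic two-column sample (MBREAD/s, MBWRTN/s) standing in for the capture:
cat > sample.csv <<'EOF'
MBREAD/s,MBWRTN/s
40,80
38,76
42,84
EOF
# Sum both columns across all samples and print the write:read ratio.
awk -F, 'NR>1 {r+=$1; w+=$2} END {printf "ratio %.1f\n", w/r}' sample.csv
```

With the sample numbers above this prints `ratio 2.0`, matching the 2:1 behavior described.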

8 Replies
WillL
Enthusiast

anyone else seen this before?

Walfordr
Expert

I have not seen MBWRTN/s being twice MBREAD/s before.

Maybe if you provide some more details someone will be able to help:

1. Which version of ESXi are you using?

2. Is your iSCSI volume configured for multiple paths?

3. Are you experiencing poor storage vMotion performance?

4. What are the local and iSCSI disk types and RPM speeds?

5. Can you create another iSCSI lun and storage vmotion/copy between those two to see if the results are the same?

6. Can you create two local datastores and storage vmotion/copy between them to see what the results are?

*5 & 6 should help you locate the bottleneck that is causing the issue.

I see that you already found this post: http://communities.vmware.com/message/1652868

I'll try to test this in my lab.

Robert -- BSIT, VCP3/VCP4, A+, MCP (Wow I haven't updated my profile since 4.1 days) -- Please consider awarding points for "helpful" and/or "correct" answers.
WillL
Enthusiast

> 1. Which version of ESXi are you using?

4.1 U1

> 2. Is your iSCSI volume configured for multiple paths?

Yes, but only one path is active; the other was disabled for troubleshooting.

> 3. Are you experiencing poor storage vMotion performance?

40MB/s is not too bad, but only half the bandwidth.

> 4. What are the local and iSCSI disk type and RPM speed.

It's a home lab, all SATA; local is 7200rpm, iSCSI is RAID 10 at 5900rpm (plan to upgrade to 7200).

> 5. Can you create another iSCSI lun and storage vmotion/copy between those two to see if the results are the same?

I tried last night, same "issue". VMFS with the same 8MB block size on both.

> 6. Can you create two local datastores and storage vmotion/copy between them to see what the results are?

Good suggestion, I will find another hard drive to try this.

> I see that you already found this post: http://communities.vmware.com/message/1652868

But in my case, even with the same block size, it didn't resolve this.

I will also try software initiator instead of hardware.

Thanks.

Walfordr
Expert

I was able to test this in one of my labs.

This lab is built in VMware Workstation. Running ESXi 4.1 u1.

I added a new datastore with a different block size, and MBWRTN/s to MBREAD/s showed the same 2:1 ratio you saw.

I then added a 3rd datastore with the same block size as the 2nd one, and transfers between those two datastores were still 2:1.

I then bumped all datastores down to 1MB, matching the first, and still was not getting 1:1 transfers.

I'll restart the host tomorrow to see if it makes a difference.

Your iSCSI disk speed could be at fault, but testing with a second local DS did not give me a conclusive answer.

I think this is the best explanation I have seen for this behavior: http://www.yellow-bricks.com/2011/02/24/storage-vmotion-performance-difference/

depping
Leadership

thanks for the link

Duncan

Yellow-Bricks.com

vSphere 5 Clustering Deepdive - eBook | Paper

WillL
Enthusiast

I read Duncan's blog before; a block size difference between the two datastores was causing the slowdown, but I didn't see any change after reformatting to the same block size. Still 1x read, 2x write.

How to tell which datamover is being used?

Walfordr
Expert

William wrote:

I read Duncan's blog before; a block size difference between the two datastores was causing the slowdown, but I didn't see any change after reformatting to the same block size. Still 1x read, 2x write.

How to tell which datamover is being used?

I have not found out how to tell which datamover is being used.

This KB mentions when the high performance datamover is not used:

http://kb.vmware.com/kb/1012159

"When I/O operations are done between storage that is managed by the VMkernel (such as a VMFS or NFS datastore) and storage that is managed by the service console (such as an EXT3, CIFS, or NFS mount point), a high performance data mover is not utilized and performance is degraded."
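One way to check, though this is my assumption rather than anything from the KB: during a storage vMotion the VMkernel log usually mentions the datamover by name (fsdm is the legacy one, fs3dm the newer one discussed in Duncan's post), so grepping the log right after a migration may show which was used. The sketch below runs that grep against a made-up sample line, since the exact log message format varies:

```shell
# On an ESXi 4.x host, after a svMotion, you would grep the real log:
#   grep -ciE 'fsdm|fs3dm' /var/log/vmkernel
# Synthetic log line standing in for the host log (message text invented):
cat > vmkernel.sample <<'EOF'
Jun 10 12:00:01 vmkernel: 0:00:12:34.567 cpu2: FS3DM: data mover loaded
EOF
# Count lines mentioning either datamover, case-insensitively.
grep -ciE 'fsdm|fs3dm' vmkernel.sample
```

A non-zero count tells you the datamover was at least logged; reading the matching lines themselves shows which one.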

--------------------

I did some more tests.

-Using cp or mv from SSH always produces higher MBWRTN/s, regardless of whether I am copying the entire VM folder or just the *.vmdk files.

-If I use the move or copy option from the Datastore Browser on the vmdk file, I get a 1:1 ratio of MBWRTN/s:MBREAD/s, but if I move/copy the entire VM folder I get 2:1.

These tests were done connecting directly to the single ESXi host.

algoeit
Contributor

I have the same issue on a brand new setup with ESXi 4.1 U2 and NetApp NFS datastores. The storage vMotions are taking ages to complete. And yes, it looks like it is vMotioning a thick disk rather than the thin disk.

Must be a known issue. Did you ever solve this problem?

Much appreciated.
