VMware Cloud Community
jbsengineer
Enthusiast

Storage vMotion moving data in 64 KB blocks

Can someone tell me what block size a Storage vMotion should be moving data at?  I need to make sure what I am seeing is working as designed.  From what I have read, it should be the block size of the VMFS volume and not a smaller sub-block size.  However, I am seeing 64 KB moves.

We probably average 1,000 Storage vMotions a year, and a few performance issues recently popped up on our radar.  So I did a little digging and found we were taxing some RAID sets with 2,000-3,000 IOPS at 64 KB blocks.

All of our VMFS 5.54 datastores were created fresh and use a 1 MB block size.  No upgrades.

This happens on migrations within an array and to adjacent arrays.  We have intentionally disabled the VAAI data mover (FYI), so it should still be using the software FS3DM data mover.
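
In case anyone wants to double-check the offload state on their own hosts, this is roughly how we verify it; a minimal sketch in Python, assuming esxcli is on the path (you could equally run the esxcli commands directly):

    # Sketch: confirm the VAAI data mover toggles.
    # "Int Value" 0 = offload disabled, 1 = enabled.
    import subprocess

    for opt in ("/DataMover/HardwareAcceleratedMove",   # XCOPY (data mover) offload
                "/DataMover/HardwareAcceleratedInit"):  # block-zeroing offload
        print(subprocess.check_output(
            ["esxcli", "system", "settings", "advanced", "list", "-o", opt]))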

Any insight is appreciated.  I attached a graph which highlights a write rate of 241,648 KB/s at 3,775 write requests/s, which works out to 64 KB per request.
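
Quick sanity check on that math (plain Python, nothing ESXi-specific):

    # 241,648 KB/s of writes at 3,775 write requests/s works out to 64 KB each.
    write_rate_kb_per_s = 241648
    write_requests_per_s = 3775
    print(write_rate_kb_per_s / float(write_requests_per_s))  # ~64.01 KB/request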

ESXi 5.1 Update 2.

Thanks.

Attachment: svmotionblock64kb.jpg

9 Replies
vfk
Expert

Does this only happen during Storage vMotion?  And does it happen for all the LUNs?  Have you looked at esxtop for other stats, e.g. DAVG, bus resets, outstanding commands, failed IOs?
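
If it helps, you can also capture those counters non-interactively and comb through them offline.  A rough sketch (esxtop's batch-mode flags are standard; the exact CSV column names vary by build):

    # Sketch: dump 10 esxtop samples at 5-second intervals to CSV,
    # then look for the DAVG/cmd (device latency) columns.
    import subprocess

    with open("/tmp/esxtop.csv", "w") as out:
        # -b = batch mode, -d = sampling delay in seconds, -n = iterations
        subprocess.call(["esxtop", "-b", "-d", "5", "-n", "10"], stdout=out)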

svMotion between arrays will not use VAAI anyway; the software data mover is the default when moving between arrays, so there is no point in disabling it for that case.  I would keep it on so that migrations within a particular array can make use of it.

vfk

jbsengineer
Enthusiast

VAAI was disabled for other reasons a couple of years ago.  But yes, I understand there is no VAAI across arrays.  I was just pointing out that the results are similar whether svMotioning within an array or between adjacent arrays.

Yes, all LUNs.  I have looked for anything out of the ordinary with bus resets, outstanding commands, etc., and found nothing.  I also combed the vmkernel logs, and they look normal.

vfk
Expert

Out of curiosity, what does the storage side show?  Can you correlate what you see on the vSphere host with what you see on the storage?

jbsengineer
Enthusiast

The storage side also reflects what I am seeing: same throughput, same front-end IO.

MKguy
Virtuoso

"From what I have read it should be the block size of the VMFS volume and not a sub-level block size."

Do you still have any source links for that assumption?

I suppose this is merely a misconception.  From what I understand, the VMFS block size is just a filesystem allocation unit.  There is no reason why IOs such as those issued by Storage vMotion would have to match that size.

Similarly, IOs generated by VMs are issued at the original size the guest requested (unless they are split by the vmkernel, which happens only rarely, with very large IO sizes).

1 MB is quite a large IO size as well.  Using smaller IO sizes makes sense in the case of Storage vMotion because it limits the amount of delta data during the svMotion process: if a single bit changes in the first and last chunk of a vmdk, only 128 KB instead of 2 MB has to be retransmitted.  (Though yes, modern svMotion uses a live write-mirroring approach, but I would guess similar considerations apply.)
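
To put rough numbers on that, assuming the worst case of one dirty write landing in the first chunk and one in the last:

    # Two dirty chunks must be re-copied regardless of the chunk size:
    # with 64 KB chunks that's 128 KB, with 1 MB chunks it's 2 MB.
    for chunk_kb in (64, 1024):
        print("%4d KB chunks -> %4d KB retransmitted" % (chunk_kb, 2 * chunk_kb))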

-- http://alpacapowered.wordpress.com
jbsengineer
Enthusiast

Correct, that was an assumption.  The sources I had read talked about making sure the VMFS block sizes of your source and destination LUNs match, because mismatched block sizes hurt performance.  Usually this was in the context of upgrading VMFS3 -> VMFS5:

Storage vMotion performance difference? - Yellow Bricks

Blocksize impact? - Yellow Bricks

My thoughts exactly regarding the large 1 MB IO size, and how mirroring and new writes would be affected during the Storage vMotion.

64 KB seems a little small for what should be a streaming read/write workload; I would prefer to increase it to 128 KB, maybe even 512 KB.
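
Back-of-the-envelope for what a larger IO size would do to the request rate on those RAID sets, holding the observed throughput constant:

    # Same 241,648 KB/s write rate at different IO sizes.
    throughput_kb_per_s = 241648
    for io_kb in (64, 128, 512):
        print("%3d KB IOs -> %5.0f requests/s"
              % (io_kb, throughput_kb_per_s / float(io_kb)))
    # 64 KB -> ~3775/s, 128 KB -> ~1888/s, 512 KB -> ~472/s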

Either way, if this is by design, that is fine.  I'm just trying to verify nothing is misconfigured in my stack before I adjust our migration procedures and/or find a way to work around the 64 KB size.

MKguy
Virtuoso

I just tested it in my environment (ESXi 5.1 U2, VMFS 5.54 with 1 MB block size, VAAI disabled) and it's using 64 KB IOs as well, so it really does seem to be by design.  Maybe there is some advanced setting controlling this behavior, but I couldn't find anything.
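
For reference, this is roughly how I dumped the candidate options (a sketch assuming the host's esxcli; I saw nothing under /DataMover that looks like a transfer-size knob):

    # List every advanced option under the /DataMover subtree.
    import subprocess

    print(subprocess.check_output(
        ["esxcli", "system", "settings", "advanced", "list", "-t", "/DataMover"]))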

I think a larger IO size might sound preferable for the svMotion performance itself, but it would also induce higher per-IO latency and could affect other VMs running on the datastore as well.

If you want clarification, you should probably reach out to VMware support or the VMware storage guru Cormac Hogan.

-- http://alpacapowered.wordpress.com
jbsengineer
Enthusiast

Thanks for testing in your environment. 

Agreed, I will probably reach out to VMware just to get confirmation. 

Thanks again!

vfk
Expert

Please let us know what your findings are... this is an interesting topic.  Storage in vSphere is the beast that is always underestimated.

vfk
