FREDYz
Contributor

The right blocksize for the underlying storage

Is an 8K block size at the SAN level ideal for virtualised environments? Because the files on these LUNs (VMDKs) are normally quite big, should I consider using a bigger block size like 64K or even 128K? If I understand correctly this might add slightly more latency to reads/writes, but on the other hand it can significantly decrease the number of IOPS to the disks.

Most operations are normal reads/writes from the virtual machines, and the default 8K block size (at the SAN level) may work fine for that. But for IO-intensive operations like deploying a new machine from a template, cloning, or Storage vMotion (which will happen more often now with Storage DRS), i.e. more sequential operations, I believe larger block sizes can be very beneficial.

At the VMFS (version 5) level the default block size is 1MB, so will each 1MB operation by the guest OS (or directly to the datastore) generate 128 IOs with an 8K block size at the SAN level, instead of only 8 IOs with a 128K block size?
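The arithmetic behind this is just division; a quick sketch (assuming, as the question does, that the array would split each 1 MB transfer into block-sized IOs):

```python
# Back-of-envelope IO count for one 1 MB transfer, *if* the array
# split each transfer into block-sized chunks (the premise this
# question is asking about; the replies address whether it holds).
KB = 1024
MB = 1024 * KB

transfer = 1 * MB
for san_block in (8 * KB, 64 * KB, 128 * KB):
    ios = transfer // san_block
    print(f"{san_block // KB:>3} KB SAN block -> {ios:>3} IOs per 1 MB transfer")
```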

Thanks

9 Replies

My View to your first paragraph:
Yes, 8K block size is ideal for VMFS 3 environments.
You can't use 64K or 128K; no such option is available.

My View to your second paragraph:
I completely agree with your words.

My View to your third paragraph:
I don't understand the question. Could you please rephrase it?
~GaneshNetworks™~ If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
FREDYz
Contributor

No Ganesh, I am talking about the underlying storage, the SAN: when you create the LUN on the SAN you can set the block size for that block storage device.

And about the VMFS I am talking about VMFS 5.

Thanks.

arturka
Expert

Hi

In my view you should read your storage vendor's documentation, because every vendor has their own best practices for setting up storage to work efficiently with ESX.

VCDX77 My blog - http://vmwaremine.com
rickardnobel
Champion

Fernando wrote:

At the VMFS (version 5) level the default block size is 1MB, so will each 1MB operation by the guest OS (or directly to the datastore) generate 128 IOs with an 8K block size at the SAN level, instead of only 8 IOs with a 128K block size?

There is actually no relation between the VMFS block size and the size of the IO from a VM to the physical disk. If the guest VM issues a 4KB read request, there will be a 4KB read against the SAN. It does not matter what the VMFS block size is.

My VMware blog: www.rickardnobel.se
mcowger
Immortal

^^^^ This is the correct answer. VMFS block size is unrelated to the IOs sent to the array. If you want to tune IO sizes for your array, do it within the application and guest.

--Matt VCDX #52 blog.cowger.us
FREDYz
Contributor

Right, but what about operations like Storage vMotion or cloning, which have nothing to do with what happens inside the guest? When it's copying very large files it will indeed generate a lot of IOPS. So increasing the block size at the array level is the way I was hoping to alleviate that demand for IOPS.

Would you think 32K or 64K would be reasonable for this ?

rickardnobel
Champion

Fernando wrote:

Right, but what about operations like Storage vMotion or cloning, which have nothing to do with what happens inside the guest? When it's copying very large files it will indeed generate a lot of IOPS. So increasing the block size at the array level is the way I was hoping to alleviate that demand for IOPS.

The amount of IOPS from the ESXi host should be the same no matter the block size on your SAN, since the host is unaware of the physical layout of your disks. As for the specific block size suitable for vSphere, I think you should check with the vendor to see what they recommend.

My VMware blog: www.rickardnobel.se
FREDYz
Contributor

Hi,

I am talking about IOPS at the storage level, not in vSphere: from the storage controller to the disks. That seems to be the bottleneck. I don't really care about IOPS on the ESXi server; that doesn't seem to be an issue.

Storage vendors are not always very helpful, which is why I am seeking advice here from people's experience.

When doing one of those operations it will read 1MB blocks from VMFS (and that's not what I am looking to change). So for every 1MB read/write I want to make sure the storage generates as few IOPS as possible to the disks, if the bottleneck is there (and therefore gets more throughput).
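The reasoning here can be sketched as follows; the per-disk IOPS figure is an illustrative assumption (roughly a 10k RPM spindle), not a measured value:

```python
# Rough model of the poster's reasoning: if the disks are the
# bottleneck and sustain a fixed number of IOPS, larger IOs mean
# more sequential throughput per disk. 200 IOPS is an assumed,
# illustrative figure for one spindle, not a specification.
KB = 1024
MB = 1024 * KB

disk_iops = 200
for block in (8 * KB, 64 * KB, 128 * KB):
    throughput = disk_iops * block  # bytes per second
    print(f"{block // KB:>3} KB IOs -> {throughput / MB:.1f} MB/s per disk")
```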

mcowger
Immortal

Doing so might increase throughput a little (not much), but would very negatively impact the performance of the VMs.

Say you created a native array stripe size of 128K to try to help with the copy process. The copy process would go faster (maybe; I doubt by more than 5%), but now ALL IOs on your array are a minimum of 128K, meaning you are increasing latency for everything.
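A rough sketch of that tradeoff (the request and block sizes below are illustrative assumptions, not measurements):

```python
# Model of the penalty: an array that cannot transfer less than one
# block moves far more data than a small request asks for.
KB = 1024

def data_moved(request_size, array_block):
    """Bytes the array actually transfers for one request, rounded
    up to whole array blocks (ceiling division)."""
    blocks = -(-request_size // array_block)
    return blocks * array_block

guest_io = 4 * KB  # a typical small random read from a VM (assumed)
for array_block in (8 * KB, 128 * KB):
    moved = data_moved(guest_io, array_block)
    print(f"{array_block // KB:>3} KB array block: 4 KB request moves {moved // KB} KB")
```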

Stick with a value that matches your application's average.

--Matt VCDX #52 blog.cowger.us