I've run into situations where a larger block size is necessary for some VMs, so the obvious question is what the drawback of a larger block size is. There doesn't seem to be a definitive consensus on this, and VMware sets it to 1MB by default.
Is there a good reason not to just make all my datastores 8MB?
Hello.
Is there a good reason not to just make all my datastores 8MB?
None that I know of, with this one exception.
Good Luck!
I set all datastores to an 8MB block size for almost all implementations.
There used to be performance issues, which is why you can change the block size, but for most implementations this is no longer a concern.
Whatever you choose, make it the same across the board; having VMFS partitions with different block sizes will only cause you pain. For example:
If you have disks provisioned as EagerZeroedThick on a datastore with a 2MB block size and do a cold migration of that server to a datastore with a 4MB block size, it will provision the disks as Thick instead of EZT. Cold migrations between datastores of the same block size do not have this problem. This is one of many issues you will face if you do not choose a standard and stick to it.
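To catch mixed block sizes before they bite, it helps to audit your inventory and group datastores by block size. A minimal sketch of that check (the datastore names and sizes below are made up for illustration; in a real environment you would feed in the block sizes reported by your management tooling rather than hard-coded pairs):

```python
from collections import defaultdict

def audit_block_sizes(datastores):
    """Group datastore names by VMFS block size (in MB).

    `datastores` is a list of (name, block_size_mb) pairs.
    Returns the grouping and a flag that is True only when
    every datastore uses the same block size."""
    by_size = defaultdict(list)
    for name, block_size_mb in datastores:
        by_size[block_size_mb].append(name)
    consistent = len(by_size) <= 1
    return dict(by_size), consistent

# Hypothetical inventory: two datastores at 8MB and one stray at 2MB.
groups, ok = audit_block_sizes([("ds1", 8), ("ds2", 8), ("legacy", 2)])
if not ok:
    print("Mixed block sizes found:", groups)
```

Running this against the hypothetical inventory flags the 2MB stray, which is exactly the kind of datastore that would silently convert an EZT disk during a cold migration.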
Regards,
Paul
See also: http://communities.vmware.com/docs/DOC-11920
Usually block size is not related to performance.
Andre
I am with vmroyale on this one.
I always go with 8MB block sizes, as this allows maximum-size files on the datastore, so I am effectively 'future proofing' myself.
I am also in favor of consistent block sizes across my estate.
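The 'future proofing' point comes from VMFS3's block-size-dependent file size limit: a file can span only a fixed number of file blocks (the commonly cited figure is ~256K blocks; actual limits are marginally lower, e.g. 2TB minus 512 bytes at 8MB). A quick sketch of that arithmetic, using the 256K-blocks-per-file rule of thumb:

```python
# Approximate VMFS3 maximum single-file size (e.g. one VMDK) per block size.
# Rule of thumb: a VMFS3 file spans at most ~256K file blocks,
# so max file size ~= block_size * 256 * 1024.
BLOCKS_PER_FILE = 256 * 1024  # ~256K blocks per file (VMFS3 rule of thumb)

def max_file_size_gb(block_size_mb: int) -> int:
    """Approximate max file size in GB for a given VMFS3 block size in MB."""
    return block_size_mb * BLOCKS_PER_FILE // 1024  # convert MB to GB

for bs in (1, 2, 4, 8):
    print(f"{bs}MB block size -> ~{max_file_size_gb(bs)}GB max file")
# 1MB -> ~256GB, 2MB -> ~512GB, 4MB -> ~1024GB, 8MB -> ~2048GB (2TB)
```

So with a 1MB block size you cannot create a virtual disk larger than roughly 256GB, while 8MB takes you to the ~2TB ceiling, which is why 8MB everywhere removes a future resize headache.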
Ok, thanks guys, this all makes sense. Except I'm left wondering why 1MB is even an option (let alone the default) if it has only drawbacks and no advantages. I'll chalk this up as VMware not always knowing best.
1MB has only drawbacks and no advantages.
With VMFS3, yes.
Maybe something will change in VMFS5, but we must wait for the product release to have official information about it.
Andre