I have never seen any definitive answer to this question.
Probably because there is no definite answer to this question.
With my RAID controllers [ LSI-8704EM2 / LSI-8708EM2 ],
you can gain more performance by swapping an old SATA/3Gbit controller for a SATA/6Gbit one.
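For scale, a quick back-of-envelope sketch of the link-speed difference (assuming the usual 8b/10b line encoding on SATA links, so only 8 of every 10 transmitted bits carry payload):

```python
def sata_max_mb_per_s(gbit_per_s: float) -> float:
    """Theoretical payload bandwidth of a SATA link in MB/s."""
    payload_bits = gbit_per_s * 1e9 * 8 / 10  # strip 8b/10b encoding overhead
    return payload_bits / 8 / 1e6             # bits -> bytes -> MB

print(sata_max_mb_per_s(3.0))  # SATA/3Gbit -> 300.0 MB/s
print(sata_max_mb_per_s(6.0))  # SATA/6Gbit -> 600.0 MB/s
```

Of course, these are wire-speed ceilings; whether spinning disks behind the controller can saturate them is another question.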
VMFS5 is using a 1MB block size
True, but VMFS5 also uses 8 KB sub-blocks. Moreover, very small files (<1 KB) can be stored directly in metadata.
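To illustrate, here is a rough sketch of how VMFS5 might place a file given the numbers above; the thresholds are simplified assumptions for illustration, not exact VMFS internals:

```python
KB = 1024
SUB_BLOCK = 8 * KB      # VMFS5 sub-block size
BLOCK = 1024 * KB       # VMFS5 block size

def vmfs5_allocation(file_size: int) -> str:
    """Illustrative guess at where VMFS5 stores a file of a given size."""
    if file_size < 1 * KB:
        return "stored in metadata (file descriptor)"
    if file_size <= SUB_BLOCK:
        return "one 8 KB sub-block"
    blocks = -(-file_size // BLOCK)   # round up to whole 1 MB blocks
    return f"{blocks} x 1 MB block(s)"

print(vmfs5_allocation(512))        # stored in metadata (file descriptor)
print(vmfs5_allocation(4 * KB))     # one 8 KB sub-block
print(vmfs5_allocation(3 * BLOCK))  # 3 x 1 MB block(s)
```

The point being: the 1 MB block size alone does not tell you how small files are packed on the datastore.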
I would use the Thick Provision Eager Zeroed to avoid fragmentation within the datastore
Where did you get the idea that *this* helps avoid fragmentation?
I believe this would be the optimal situation in regards the datastore?
I do not believe this would be optimal, and I'm not sure there is a generally optimal value. Setting the stripe size equal to the VMFS5 block size might not be the best option, because it is quite small. Moreover, there are other things you should consider, e.g.:
- hard-drive sector size (mostly 512 B or 4 KB)
- SSD read/erase block size (highly vendor-specific)
- VM filesystem sector/block size (depends on the OS)
- VM filesystem load (depends on what your VM is doing)
- size of your RAID controller's on-board cache
- type of RAID array (0, 1, 10, 5, 6, ...)
- etc., etc.
IMHO, if you do not have time for testing, just stick with the default value. A blind shot can easily make things worse...
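To make the stripe-size/block-size interaction concrete, a small sketch of how a single 1 MB I/O would spread across a RAID-0 array; the disk count and stripe sizes are arbitrary examples, not recommendations:

```python
KB = 1024
IO_SIZE = 1024 * KB   # one VMFS5-block-sized I/O
DISKS = 4             # hypothetical 4-disk RAID-0

for strip in (64 * KB, 256 * KB, 1024 * KB):
    strips = IO_SIZE // strip   # strips touched by the I/O
    per_disk = strips / DISKS   # how many strips land on each disk
    print(f"strip={strip // KB:4d} KB: {strips:2d} strips, "
          f"{per_disk} strips per disk")
```

With a small stripe the I/O is split across all spindles many times over (more parallelism, more seeks); with a 1 MB stripe it lands on a single disk. Which is faster depends on the workload, which is exactly why testing beats guessing here.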