I know this has been asked several times, but I have never seen a definitive answer to this question.
In my RAID controllers [ LSI-8704EM2 / LSI-8708EM2 ] I can set the strip size; the full stripe size is the strip size multiplied by the number of drives.
VMFS5 uses a 1 MB block size, so with my 4-drive configuration I would need to set the strip size to 256 KB; with 8 drives it would be 128 KB.
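The arithmetic above can be sketched in a few lines. This is just the strip-size calculation as stated in the post; note that for parity RAID levels (5/6) only the data drives would count, which the post does not address.

```python
# Minimal sketch of the strip-size calculation from the post:
# full stripe = strip size * number of drives, so
# strip size = target stripe size / number of drives.

def strip_size_kb(target_stripe_kb, drives):
    """Per-disk strip size for a given full-stripe target.

    Note: for RAID-5/6, 'drives' should arguably be the number of
    *data* drives (total minus parity) -- not covered in the post.
    """
    if target_stripe_kb % drives:
        raise ValueError("stripe size not evenly divisible by drive count")
    return target_stripe_kb // drives

print(strip_size_kb(1024, 4))  # 256 (KB) -- the 4-drive case
print(strip_size_kb(1024, 8))  # 128 (KB) -- the 8-drive case
```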
When I create a new VM I would use Thick Provision Eager Zeroed to avoid fragmentation within the datastore. I believe this would be the optimal situation with regard to the datastore?
Would the OS [Linux] in the VM undo this theory?
Any advice or feedback?
> I have never seen any definitive answer to this question.

Probably because there is no definitive answer to this question.
> In my raid controllers [ LSI-8704EM2 / LSI-8708EM2 ]

You could gain more performance by switching your old SATA/3Gbit controller for a SATA/6Gbit one.
> VMFS5 is using a 1MB block size

True, but VMFS5 also uses 8 kB sub-blocks. Moreover, very small files (<1 kB) can be stored in the metadata.
> I would use the Thick Provision Eager Zeroed to avoid fragmentation within the datastore

Where did you get the idea *this* helps to avoid fragmentation???
> I believe this would be the optimal situation in regards the datastore?

I do not believe this would be optimal, and I'm not sure there is any generally optimal value. Setting the strip size equal to the VMFS5 block size might not be the best option, because it is quite small. Moreover, there are other things you should consider, e.g.:
- hard-drive sector size (mostly 512 B or 4 kB)
- SSD read/erase block size (highly vendor-specific)
- VM filesystem sector/block size (depends on the OS)
- VM filesystem load (depends on what your VM is doing)
- size of your RAID controller's on-board cache
- type of RAID array (0, 1, 10, 5, 6, ...)
- etc., etc.
IMHO, if you do not have time for testing, just stick with the default value. With a blind shot you can just make things worse...
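One quick thing you *can* check without benchmarking is whether the sizes in that list divide evenly into each other. A hedged sketch, using illustrative sizes (the values below are assumptions, not recommendations):

```python
# Alignment sanity check across the storage layers listed above.
# All sizes here are illustrative assumptions for the sketch.

SECTOR = 4096                    # a 4K-sector drive (assumption)

layers = {
    "RAID strip": 64 * 1024,     # a common controller default (assumption)
    "VMFS block": 1024 * 1024,   # VMFS5's fixed 1 MB block
    "guest FS block": 4096,      # e.g. ext4 default block size
}

def aligned(size_bytes, unit_bytes):
    """A layer is aligned if its size is a whole multiple of the unit below."""
    return size_bytes % unit_bytes == 0

for name, size in layers.items():
    status = "aligned" if aligned(size, SECTOR) else "MISALIGNED"
    print(f"{name}: {status} to the {SECTOR}-byte sector")
```

Divisibility only rules out the obviously bad combinations; it says nothing about which aligned value performs best under your actual load, which is why testing is still needed.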
Hiya,
thanks ever so much for your reply, much appreciated.
If we initialize the RAID, aren't all blocks [strips] on the disks that size, no matter how VMware or the OS within the VM uses them?
I would need to do some real-world testing, like you said.
>> I would use the Thick Provision Eager Zeroed to avoid fragmentation within the datastore
> Where did you get the idea *this* helps to avoid fragmentation???
Thick-provisioned vmdks do indeed help to avoid fragmented datastores. A thick-provisioned vmdk usually consists of one up to a few hundred fragments; thin-provisioned vmdks often consist of tens of thousands of fragments.
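The mechanism behind that difference can be shown with a toy model (this is a deliberately simplified bump allocator, not how the VMFS allocator actually works): a disk allocated in full at creation gets one contiguous extent, while two thin disks growing on demand interleave their allocations.

```python
# Toy model (assumption: simplistic sequential allocator) of why thin
# provisioning fragments while eager-zeroed thick provisioning does not.

def count_fragments(blocks):
    """Count runs of consecutive block numbers (= extents/fragments)."""
    return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

class Datastore:
    """Toy bump allocator: always hands out the next free blocks in order."""
    def __init__(self):
        self.next_free = 0

    def alloc(self, n):
        start = self.next_free
        self.next_free += n
        return list(range(start, start + n))

ds = Datastore()

# Eager-zeroed thick disk: the whole 100-block extent is reserved up front.
thick = ds.alloc(100)

# Two thin disks growing alternately, 10 blocks at a time: their grains
# interleave on the datastore, so each disk ends up scattered.
thin_a, thin_b = [], []
for _ in range(10):
    thin_a += ds.alloc(10)
    thin_b += ds.alloc(10)

print(count_fragments(thick))   # 1 extent
print(count_fragments(thin_a))  # 10 extents
```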
________________________________________________
Do you need support with a VMFS recovery problem ? - send a message via skype "sanbarrow"
I do not support Workstation 16 at this time ...
Clear, but I was wondering about that "eager zeroed" part...