VMware Cloud Community
Wabun
Enthusiast

Strip size in Raid Controller with ESXi servers - RAID 10

I know this has been asked several times, but I have never seen a definitive answer to this question.

In my RAID controllers [ LSI-8704EM2 / LSI-8708EM2 ] I can set the strip size to get the required stripe size, by multiplying it by the number of drives.

VMFS5 uses a 1MB block size, so with my 4-drive configuration I would need to set my strip size to 256KB; with my 8-drive configuration it would be 128KB.
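
For a sanity check of that arithmetic, a minimal sketch (assuming the controller counts every spindle in the stripe; for RAID 10 some controllers count only the data-carrying half of the drives, so the LSI documentation should confirm which applies):

```python
# Strip size per drive needed so that one full stripe matches a target size.
# Assumes all drives count toward the stripe; in RAID 10 a controller may
# count only the mirrored pairs (half the spindles) -- check the docs.

def strip_size_kb(target_stripe_kb: int, drives: int) -> int:
    if target_stripe_kb % drives != 0:
        raise ValueError("stripe size must divide evenly across drives")
    return target_stripe_kb // drives

print(strip_size_kb(1024, 4))  # 256 KB strip for 4 drives, 1 MB stripe
print(strip_size_kb(1024, 8))  # 128 KB strip for 8 drives, 1 MB stripe
```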

When I create a new VM I would use Thick Provision Eager Zeroed to avoid fragmentation within the datastore. I believe this would be the optimal situation with regard to the datastore?

Would the OS [Linux] in the VM undo this theory?

Any advice or feedback?

4 Replies
JarryG
Expert

I have never seen any definitive answer to this question.

Probably because there is no definitive answer to this question.

In my raid controllers [ LSI-8704EM2 / LSI-8708EM2 ]

You would gain more performance by switching your old SATA/3Gbit controller for a SATA/6Gbit one.

VMFS5 is using a 1MB block size

True, but vmfs5 also uses 8kB sub-blocks. Moreover, very small files (<1kB) can be stored in metadata.

I would use the Thick Provision Eager Zeroed to avoid fragmentation within the datastore

Where did you get the idea *this* helps to avoid fragmentation???

I believe this would be the optimal situation in regards the datastore?

I do not believe this would be optimal, and I'm not sure there is a generally optimal value at all. Setting the strip size equal to the vmfs5 block size might not be the best option, because it is quite small. Moreover, there are other things you should consider (a quick way to check some of these from inside a Linux guest is sketched after this list), e.g.:

- hard-drive sector size (mostly 512B or 4kB)

- ssd read/erase block size (highly vendor-specific)

- VM filesystem sector/block size (depends on OS)

- VM filesystem load (depends on what your VM is doing)

- size of your raid-controller's on-board cache

- type of raid-array (0,1,10,5,6,...)

- etc, etc
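
A minimal sketch of checking some of these from inside a Linux guest, using the block-device attributes Linux exposes via sysfs ("sda" is an assumed device name; adjust for your VM):

```python
# Read block-device geometry from the standard Linux sysfs interface.
from pathlib import Path

def queue_attr(device: str, attr: str) -> str:
    """Return one queue attribute of a block device as a string."""
    return Path(f"/sys/block/{device}/queue/{attr}").read_text().strip()

for attr in (
    "logical_block_size",   # sector size the OS addresses (512B or 4kB)
    "physical_block_size",  # sector size the medium really uses
    "minimum_io_size",      # smallest I/O without read-modify-write penalty
    "optimal_io_size",      # preferred I/O size (often the RAID stripe)
):
    print(f"{attr} = {queue_attr('sda', attr)}")
```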

IMHO, if you do not have time for testing, just stick with the default value. With a blind shot you can just make things worse...

_____________________________________________ If you found my answer useful please do *not* mark it as "correct" or "helpful". It is hard to pretend to be a noob with all those points! 😉
Wabun
Enthusiast

Hiya,

thanks ever so much for your reply, much appreciated.

If we initialize the RAID, aren't all blocks [strips] on the disks that size, no matter how VMware or the OS within the VM uses them?

I would need to do some real-world testing, like you said.

continuum
Immortal

>> I would use the Thick Provision Eager Zeroed to avoid fragmentation within the datastore

> Where did you get idea *this* helps to avoid fragmentation???

Thick-provisioned vmdks do indeed help to avoid fragmented datastores. With thick provisioning you usually get one up to a few hundred fragments. Thin-provisioned vmdks often consist of tens of thousands of fragments.
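
If you want to see fragment counts for yourself, a minimal sketch using the standard `filefrag` tool from e2fsprops works on ordinary Linux filesystems (not on VMFS itself, which would need VMware-specific tooling; the path below is hypothetical):

```python
# Count the extents (fragments) a file occupies, using `filefrag`
# from e2fsprogs. Illustrates the thick-vs-thin difference on a
# regular Linux filesystem; VMFS needs VMware tooling instead.
import subprocess

def fragment_count(path: str) -> int:
    out = subprocess.run(["filefrag", path],
                         capture_output=True, text=True, check=True)
    # filefrag prints e.g. "disk-flat.vmdk: 37 extents found"
    return int(out.stdout.rsplit(":", 1)[1].split()[0])

print(fragment_count("/mnt/backup/disk-flat.vmdk"))  # hypothetical path
```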


________________________________________________
Do you need support with a VMFS recovery problem? Send a message via Skype: "sanbarrow"
I do not support Workstation 16 at this time ...

JarryG
Expert

Clear, but I was wondering about that "eager zeroed" part...

_____________________________________________ If you found my answer useful please do *not* mark it as "correct" or "helpful". It is hard to pretend to be a noob with all those points! 😉