VMware Cloud Community
PatrickWE
Contributor

RAID Read & Write policy for ESX 3.5

Just wondering, creating a new vmfs partition on an ESX 3.5 server after adding some disks...

On a Dell PowerEdge 2900 with a PERC 5/i RAID controller and SAS disks, the RAID read policy can be either No Read Ahead, Read Ahead, or Adaptive Read Ahead.

For the write policy, my choices are: Write Through, Write Back, Force Write Back...

What would be the best choice of read & write policy on a RAID 5 with 3 disks?

Thanks !!!

7 Replies
mcowger
Immortal

I would set read ahead to Adaptive.

Does your RAID controller have a battery backed cache? If so, select write back. If not, select write through.


--Matt VCDX #52 blog.cowger.us
PatrickWE
Contributor

I think the RAID controller has a battery, but I will check...

In ESX 3.5, when creating the new VMFS partition, what should the maximum file size / block size be? The choices are:

256 GB, Block size : 1MB

512 GB, Block size : 2MB

1024 GB, Block size : 4MB

2048 GB, Block size : 8MB

I wonder what the default is, and how I would check my existing VMFS partitions already in there...
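For reference, the four options listed above all follow the same pattern: maximum file size = block size × 262,144 file blocks. That multiplier is inferred from the listed pairs (1 MB → 256 GB, etc.), not from VMware documentation, so treat this as a sketch of the arithmetic:

```python
# VMFS 3 max file size scales linearly with block size.
# BLOCKS_PER_FILE is inferred from the 1 MB -> 256 GB pairing above.
BLOCKS_PER_FILE = 256 * 1024  # 262,144

def max_file_size_gb(block_size_mb: int) -> int:
    """Maximum single-file size in GB for a given VMFS block size in MB."""
    return block_size_mb * BLOCKS_PER_FILE // 1024  # MB -> GB

for bs in (1, 2, 4, 8):
    print(f"Block size {bs} MB -> max file size {max_file_size_gb(bs)} GB")
# -> 256, 512, 1024, 2048 GB
```

As for checking an existing datastore: if memory serves, `vmkfstools -P /vmfs/volumes/<datastore>` on the service console reports the block size of a VMFS volume, but verify that against your ESX documentation.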

RParker
Immortal

Well, I disagree. I found a post (not sure where), but I remember reading that read ahead only helps with small random files. Since ESX uses LARGE VMDK files, read ahead won't help, and it can actually hurt performance somewhat.

One thing I need to mention is that this assumes your VMs are running locally on the machine. If you have a SAN, NONE of this will have ANY effect. The SAN has its own cache controller, which is not affected by the PE's internal card.

Turn read ahead off, and set cache to write through, not write back. That's how I have ALL my PE 2950/R900 machines. Also turn off the CPU caching features. Using write through is more secure.

Write-Back. When using write-back caching, the controller sends a write-request completion signal as soon as the data is in the controller cache but has not yet been written to disk. Write-back caching may provide improved performance since subsequent read requests can more quickly retrieve data from the controller cache than they could from the disk. Write-back caching also entails a data security risk, however, since a system failure could prevent the data from being written to disk even though the controller has sent a write-request completion signal. In this case, data may be lost. Other applications may also experience problems when taking actions that assume the data is available on the disk.

Write-Through. When using write-through caching, the controller sends a write-request completion signal only after the data is written to the disk. Write-through caching provides better data security than write-back caching, since the system assumes the data is available only after it has been safely written to the disk.
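The trade-off described in those two definitions can be sketched with a toy model (the `ToyController` class and its names are made up purely for illustration, not a real driver): write-back acknowledges the write before the data reaches disk, so a power loss before the flush loses acknowledged data; write-through acknowledges only after the data lands.

```python
class ToyController:
    """Illustrative cache model: 'disk' is what survives a crash."""

    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.cache = []   # volatile controller cache
        self.disk = []    # persistent storage

    def write(self, data):
        if self.write_back:
            self.cache.append(data)   # ack now, flush to disk later
        else:
            self.disk.append(data)    # ack only after it's on disk
        return "ack"

    def flush(self):
        self.disk.extend(self.cache)
        self.cache.clear()

    def crash(self):
        self.cache.clear()            # cache contents lost (no battery)

wb = ToyController(write_back=True)
wb.write("block A")
wb.crash()                            # power fails before the flush
print(wb.disk)                        # [] -- acknowledged data is gone

wt = ToyController(write_back=False)
wt.write("block A")
wt.crash()
print(wt.disk)                        # ['block A'] -- survived the crash
```

A battery-backed cache changes the picture because the cache contents survive the power loss, which is why write-back is generally considered safe only when the battery is present and healthy.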

RParker
Immortal

The block size really only matters for large files. If you don't need single VMDK files over 256 GB, then you don't need more than a 1 MB block size.

RParker
Immortal

I found it, and this is one of many. Performance testing on a PE 2950 with adaptive read ahead / write back showed slower performance than no read ahead (read caching still active) and write through.

On another site I found, the author ran similar I/O tests and saw the same result on a different RAID controller.

mcmcomput
Contributor

Has anyone else tested this? I tested it on my 2950s today, which have 6x 300 GB 15k RPM drives in a RAID 10 configuration. With read/write caching turned off I got more consistent results of around 125 MB/s write & 198 MB/s read. With it turned on, I got more mixed results, varying by +/- 15%.

I ran 50 & 100 MB file tests with Mpower.

Can controllers actually help ESX by caching, since the chunks are generally 4-8 MB? That's what it's caching, right? Compared to running natively, where it caches files of different sizes, from 8 KB to a few MB.

Attached are my results on my servers using IOmeter with 4 KB I/Os @ 100%, 0% random.

The OS is a Windows 2003 R2 clean install.
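When comparing these numbers across posts, it helps to remember that at a fixed I/O size, throughput and IOPS are interchangeable (IOPS = throughput ÷ I/O size). A quick conversion sketch, plugging in the ~125 MB/s write figure above with the 4 KB IOmeter access size:

```python
def iops(throughput_mb_s: float, io_size_kb: float) -> float:
    """Convert throughput at a fixed I/O size to I/O operations per second."""
    return throughput_mb_s * 1024 / io_size_kb

# ~125 MB/s of 4 KB writes works out to 32,000 IOPS:
print(iops(125, 4))  # 32000.0
```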

BUGCHK
Commander

> Can controllers actually help ESX by caching since the chunks are generally 4-8Mb?

If the question is: does the VMkernel do VMFS I/O in multiples of the VMFS block size?

The answer is: NO. I really looked into this some time ago using the array's performance utility, and I saw all kinds of I/O sizes, down to a single block.
