RAID10 means you have a stripe set consisting of two internal RAID-1 arrays. Each RAID-1 array consists of two disks and has a capacity of 146GB, so the whole stripe set has a capacity of 2*146GB = 292GB.
The CLARiiON has its own internal OS which uses about 26GB, so the result is 292GB-26GB = 266GB. It's even less in practice, because manufacturers usually calculate 1GB = 1,000,000,000 bytes instead of the 1,073,741,824 bytes (1024^3) your OS reports.
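The arithmetic above can be sketched like this (the 26GB system overhead is the figure quoted in this thread, not a general CLARiiON constant):

```python
# RAID10 usable capacity per the numbers in this thread.
DISK_GB = 146          # marketing gigabytes (10^9 bytes) per disk
OS_OVERHEAD_GB = 26    # space reserved by the CLARiiON's internal OS (as quoted above)

raid10_raw = 2 * DISK_GB           # two mirrored pairs striped -> 2 disks' worth of data
usable = raid10_raw - OS_OVERHEAD_GB
print(usable)                      # 266

# Marketing GB vs. binary GiB: 1 GB = 10^9 bytes, 1 GiB = 2^30 bytes,
# which is why the OS reports even less than 266.
usable_gib = usable * 10**9 / 2**30
print(round(usable_gib, 1))        # 247.7
```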
What's the benefit of using RAID10? -> A RAID-5 array can survive one disk failure without losing any data; a RAID10 with 4 disks can lose two disks (as long as they are not in the same RAID-1 array). So it's more reliable.
However, you give up more space compared with a 4-disk RAID-5 (3*146GB = 438GB, minus the 26GB OS overhead = 412GB usable).
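The capacity trade-off between the two layouts, using the same disk size and overhead quoted above:

```python
# Usable capacity of a 4-disk RAID-5 group vs. a 4-disk RAID 1/0 group.
DISK_GB, OVERHEAD_GB, N_DISKS = 146, 26, 4

raid5_usable  = (N_DISKS - 1) * DISK_GB - OVERHEAD_GB   # one disk's worth of parity
raid10_usable = (N_DISKS // 2) * DISK_GB - OVERHEAD_GB  # half the disks hold mirror copies

print(raid5_usable)   # 412
print(raid10_usable)  # 266
```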
Well, you have to decide which is more important to you: reliability or disk space. If you want to know how to calculate the MTTDL_DF (Mean Time To Data Loss due to Disk Failure), have a look at the following link .
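A rough sketch of the MTTDL idea, using the textbook first-order approximation rather than the exact model from the linked article (the MTTF and rebuild-time numbers are illustrative assumptions): data is lost when a second disk in the same redundancy group fails before the first failure is rebuilt.

```python
def mttdl_raid5(n_disks, mttf_h, mttr_h):
    # RAID-5: after one failure, a second failure among any of the
    # remaining n-1 disks is fatal.
    return mttf_h**2 / (n_disks * (n_disks - 1) * mttr_h)

def mttdl_raid10(n_disks, mttf_h, mttr_h):
    # RAID 1/0: only the failed disk's mirror partner is fatal, so each
    # of the n disks has exactly one critical partner.
    return mttf_h**2 / (n_disks * mttr_h)

MTTF = 500_000   # assumed disk MTTF in hours
MTTR = 24        # assumed rebuild time in hours

ratio = mttdl_raid10(4, MTTF, MTTR) / mttdl_raid5(4, MTTF, MTTR)
print(ratio)     # for 4 disks, RAID10's MTTDL comes out ~3x longer
```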
Hope this helps.
With RAID10, write I/O is better than with RAID5 because there is no delay to calculate and write the parity. However, 4 disks are too few to really get "good" performance (>Gbit).
I think they said anything over 4 disks gives bad performance, and that 4 disks is optimal. That's the reason they do not want to create a LUN larger than 266GB. Is that true?
Not in general, no. The particular array they may be using may have performance characteristics such that a 4-disk set is good. However, most non-virtual (traditional RAID) arrays can construct a set of anywhere from 4-5 to 12-16 disks, and some up to 28. Then there are the virtualizing arrays which can stripe across hundreds of disks for outstanding performance.
It sounds like the array they are using is old and doesn't allow much in the way of re-configuration when bottlenecks are observed.
I have to agree with the other comments. I have my VMware cluster connected to a 1 TB LUN on a CLARIION and have not experienced any performance issues. That LUN currently supports multiple VMs and I anticipate adding additional VMs to the LUN.
I'll have to see if I can find the article but, EMC's optimum LUN size for VMware is 250GB. This has to do with the SCSI command queue size or something like that. ESX has one command queue for each LUN. The larger the LUN, the more VMs you can have on there. The more VMs, the higher the chance of filling up the command queue. However, this was for ESX v3.0; some of this may have changed in 3.5. But, I've seen no performance difference between a 250GB LUN and an 800GB LUN.
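A back-of-the-envelope sketch of that queue-sharing argument. The per-LUN queue depth of 32 matches common ESX 3.x HBA defaults, but treat both numbers as illustrative assumptions, not EMC's actual guidance:

```python
# All VMs on a VMFS datastore share the one command queue ESX keeps per LUN.
LUN_QUEUE_DEPTH = 32   # assumed per-LUN HBA queue depth

def outstanding_slots_per_vm(vms_on_lun):
    # Average queue slots available to each VM if they issue I/O evenly.
    return LUN_QUEUE_DEPTH / vms_on_lun

for vms in (4, 8, 16, 32):
    print(vms, outstanding_slots_per_vm(vms))
```

The point is simply that packing more VMs onto one big LUN shrinks each VM's share of the queue, which is why a smaller per-LUN VM count was recommended, not the LUN's size in GB as such.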
On a CLARiiON, a LUN can be spread across hundreds of disks if the performance is needed, using MetaLUNs across multiple RAID groups. For performance, the main difference between RAID 1 (or 1/0) and RAID 5 shows up in heavy write environments. If you are doing all read activity there is no parity penalty, so each I/O is satisfied equally. However, writes in a RAID 1 volume require 2 I/Os to disk (one per mirror), while writes in a RAID 5 volume require a read of data and parity followed by a write of data and parity, i.e. 4 I/Os to disk. Depending on the I/O characteristics, some applications run much faster on RAID 1 or 1/0, while others are almost all read and don't get a big gain out of a RAID 1 layout.
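The write-penalty arithmetic above can be turned into a quick estimate. The per-disk IOPS figure and workload mix below are assumed numbers for illustration only:

```python
# Back-end I/Os per host write: RAID 1/0 costs 2 (one per mirror),
# RAID 5 costs 4 (read data + read parity + write data + write parity).
WRITE_PENALTY = {"raid10": 2, "raid5": 4}

def host_iops(n_disks, iops_per_disk, write_fraction, level):
    # Reads cost 1 back-end I/O; writes cost the level's penalty.
    backend = n_disks * iops_per_disk
    cost_per_host_io = (1 - write_fraction) * 1 + write_fraction * WRITE_PENALTY[level]
    return backend / cost_per_host_io

# Assumed: 4 disks at 150 IOPS each, workload 50% writes.
print(round(host_iops(4, 150, 0.5, "raid10")))  # 400
print(round(host_iops(4, 150, 0.5, "raid5")))   # 240
```

With a mostly-read workload the two levels converge, which matches the point above that read-heavy applications don't gain much from RAID 1/0.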