Well, you'll probably get 100 different responses, but with one common thought: do you want performance or capacity? Or a mixture of both?
My recommendation for performance is RAID-10. My recommendation for capacity with performance is RAID-50.
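That trade-off can be sketched with a quick usable-capacity calculation. This is a rough sketch only: the helper name and the per-level formulas are my own, and real controllers reserve some space for metadata, so actual figures will be slightly lower.

```python
def usable_gb(level, drives, size_gb, parity_groups=2):
    """Rough usable capacity per RAID level (ignores controller overhead)."""
    if level == "RAID-1":
        return size_gb                              # two-drive mirror: one drive's capacity
    if level == "RAID-10":
        return (drives // 2) * size_gb              # half the spindles hold mirror copies
    if level == "RAID-5":
        return (drives - 1) * size_gb               # one drive's worth of parity
    if level == "RAID-50":
        return (drives - parity_groups) * size_gb   # one parity drive per RAID-5 group
    raise ValueError(f"unknown level: {level}")

# 8x 146GB drives:
print(usable_gb("RAID-10", 8, 146))  # 584 GB, best random-write performance
print(usable_gb("RAID-50", 8, 146))  # 876 GB, more capacity, still striped
```

With the same 8 drives, RAID-50 buys roughly 50% more space at the cost of slower rebuilds and a parity-write penalty, which is why the answer depends on the workload.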
The server is being used to consolidate a number of physical servers in a small communications room for demonstration purposes, so the load will be very light. Previously there have been issues with maintenance, and older machines have failed without anyone noticing, so I was looking for redundancy, with a hot spare so that the RAID rebuild would be automatic.
Just to confirm, each array will be seen as one LUN to ESX?
We really only recommend RAID 1, 10 or 50 (where possible).
RAID 1 for your ESX OS.
RAID 10 or 50 for your VMFS (VMs).
We have always been fans of RAID 10. I was looking at the P400 specs and it does support RAID 10. However, with an 8-disk RAID 10 array you don't have a hot spare, which is also nice to have.
Would 146GB not be wasted on just the ESX OS?
In his scenario there would be some extra space, but you could put an extra VMFS volume on that first mirror so the space isn't wasted. All of my stand-alone hosts (non-SAN-attached) have this drive configuration:
2x 72GB RAID 1 - ESX OS and SWAP
4x (or 6x in the G5 case) 146GB RAID 10 - VMFS
My SAN-attached hosts have only the 2x 72GB for OS and swap. I do not usually create a VMFS on local storage in that case.
Hope this helps some.
Oh, and to answer an earlier question: each logical drive is seen as a LUN in ESX. If you have more than one logical drive per array, you will see multiple LUNs on the local SCSI "HBA".
What is the limit on the number of VM VMDKs you can have per VMFS? Is it 32?
So are 3 drives in an array seen as one logical drive and thus one LUN, or as 3 LUNs for the physical drives?
There are no real limits as such, but you have to be aware of what your apps are doing, etc.
Putting many VMs onto a single LUN will give you SCSI locking (reservation) issues.
So it's better to create multiple LUNs and share out the load.
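A simple way to picture "sharing out the load" is round-robin placement of VMs across several datastores. This is only an illustration; the VM and datastore names below are made up.

```python
def spread_vms(vms, datastores):
    """Round-robin VMs across LUNs/datastores to spread SCSI reservation contention."""
    placement = {ds: [] for ds in datastores}
    for i, vm in enumerate(vms):
        placement[datastores[i % len(datastores)]].append(vm)
    return placement

vms = [f"vm{n:02d}" for n in range(1, 7)]
print(spread_vms(vms, ["vmfs1", "vmfs2", "vmfs3"]))
# {'vmfs1': ['vm01', 'vm04'], 'vmfs2': ['vm02', 'vm05'], 'vmfs3': ['vm03', 'vm06']}
```

In practice you would also weight placement by each VM's I/O profile rather than by count alone, but the idea is the same: fewer VMs contending for each LUN's reservations.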
Thanks for the info.
You're welcome, hope it helps.
acr is absolutely right about the SCSI locks. The number of VMDKs per "LUN" will vary depending on a lot of things.
Your 3 drives are in one array, but depending on your controller you can have one logical drive for that array, or several. If you want to avoid the SCSI locking possibility on that same 3-drive array, you could consider creating two or more logical drives, which would be seen as separate LUNs within ESX.
Finally something I can answer somewhat intelligently :-}
Typically RAID-6 is used with SATA drives; you have SAS drives, which are more reliable, so save the extra drive.
If you are doing high I/O, then put as many drives into one volume as you can: more drives mean higher I/O, and fewer drives mean lower I/O.
I would carve one RAID volume set, i.e. RAID-10 or RAID-5, across the maximum number of drives, then carve out your LUNs/volumes as you need them. That way your data is striped across the maximum number of drives, while the individual LUNs/volumes are sized to what you need.
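As a rough illustration of why more spindles help: random I/O capability scales close to linearly with the number of drives in the stripe set. The per-drive figures below are ballpark assumptions for SAS drives of this era, not measurements.

```python
def rough_random_iops(drives, rpm):
    """Ballpark random-I/O estimate: spindle count times an assumed per-drive figure."""
    per_drive = {10000: 120, 15000: 170}[rpm]  # assumed IOPS per SAS spindle
    return drives * per_drive

print(rough_random_iops(4, 10000))  # 480: 4-drive stripe set
print(rough_random_iops(8, 10000))  # 960: doubling spindles roughly doubles I/O
```

Cache, RAID write penalties and workload mix all move the real numbers, which is why the advice below to experiment is sound.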
My best advice: experiement, and see if you notice a performance difference.
And learn from my spelling mistakes as well :-}
experiement should be "experiment"
I have decided to try to amend the order for the server before it is built, replacing two of the 146GB drives with 73GB 15K RPM drives.
I will then go for RAID 10 with the two 15K drives, and a RAID 5 array with 5 disks plus one hot spare. The ESX OS and swap will go on the RAID 10 array, and the VMFS on the RAID 5.
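For reference, a back-of-the-envelope check of the usable space this layout gives (ignoring formatting and controller overhead):

```python
# 2x 73GB 15K in RAID 10 (effectively a two-drive mirror) for ESX OS + swap
os_gb = 73
# 5x 146GB in RAID 5 for VMFS (n-1 drives of data), plus 1 drive kept as hot spare
vmfs_gb = (5 - 1) * 146
print(os_gb, vmfs_gb)  # 73 584
```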