I have just ordered an HP DL380 G5 and it is equipped with the P400 RAID controller with the 512MB cache and battery backup.
Attached to that will be 8 x 146GB 10k SAS HDDs, and I am trying to work out the most suitable approach to configuring the disks. I have briefly read about the benefits of one large VMFS volume versus several smaller ones.
The options that I have come up with so far are:
7 disks in a RAID 5 array with 1 hot spare giving a total of 876GB storage
2 RAID 5 arrays of 4 disks with no hot spare giving 876GB of storage
1 RAID 5 array of 3 disks and one RAID 5 array of 4 disks with one hot spare giving 730GB of storage
Now, as far as I am aware, the controller supports RAID levels 0, 1, 10, 5, 50 and 6.
Does anyone have any other ideas for configurations, or comments on what I should be considering?
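For what it's worth, the usable capacities in the three options can be sanity-checked with a quick script. The `raid5_usable` helper is just my own illustration of the arithmetic (usable RAID 5 space = (disks - 1) x drive size, with hot spares contributing nothing), not anything controller-specific:

```python
# Sanity-check the capacities of the three options above (146 GB drives).
DRIVE_GB = 146

def raid5_usable(disks, drive_gb=DRIVE_GB):
    """Usable capacity of a single RAID 5 array: one drive's worth goes to parity."""
    return (disks - 1) * drive_gb

# Option 1: 7-disk RAID 5 + 1 hot spare
print(raid5_usable(7))                    # 876

# Option 2: two 4-disk RAID 5 arrays, no spare
print(raid5_usable(4) + raid5_usable(4))  # 876

# Option 3: 3-disk + 4-disk RAID 5 arrays + 1 hot spare
print(raid5_usable(3) + raid5_usable(4))  # 730
```

The numbers match the figures quoted in the post.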
Well, you'll probably get 100 different responses, but with one common thought: do you want performance or capacity? Or a mixture of both?
My recommendation for performance is RAID-10. My recommendation for capacity with performance is RAID-50.
The server is being used to consolidate a number of physical servers in a small communications room for demonstration purposes, so the load will be very light. Previously there have been issues with maintenance and with older machines failing without anyone noticing, so I was looking for redundancy, and the idea of a hot spare appealed because it makes the RAID rebuild automatic.
Just to confirm, each array will be seen as one LUN to ESX?
We have always been a fan of RAID 10, and looking at the P400 specs it does support RAID 10. However, with an 8-disk RAID 10 array you don't have the hot spare, which is also nice to have.
In his scenario there would be some extra space, but you could put an extra VMFS volume on that first mirror so the space isn't wasted. All of my stand-alone hosts (non-SAN attached) have this drive configuration:
2x 72GB RAID 1 - ESX OS and SWAP
4x (or 6x in the G5 case) 146GB RAID 10 - VMFS
My SAN-attached hosts have only 2x 72GB for OS and swap; I do not usually create a VMFS on local storage in that case.
Hope this helps some.
Oh, and to answer an earlier question: each logical drive is seen as a LUN in ESX. If you have more than one logical drive per array you will see multiple LUNs for the local SCSI "hba".
What is the limit on the number of VM VMDKs you can have per VMFS, 32?
So 3 drives in an array are seen as one logical drive, and thus one LUN, rather than as 3 LUNs for the physical drives?
No real limits as such, but you have to be aware of what your apps are doing, etc.
Putting many VMs onto a single LUN will give you SCSI locking (reservation) issues,
so it is better to create multiple LUNs and share out the load.
acr is absolutely right about the SCSI locks. The number of VMDKs per LUN will vary depending on a lot of things.
Your 3 drives are in one array, but depending on your controller you can have one logical drive for that array, or several. If you want to avoid the SCSI lock possibility on that same 3-drive array, you could consider creating two or more logical drives, which would be seen as separate LUNs within ESX.
Finally something I can answer somewhat intelligently :-}
Typically RAID-6 is used with SATA drives; you have SAS, which are more reliable, so save the extra drive.
If you are doing high i/o, then put as many drives into one volume as you can: more drives mean higher i/o, fewer drives mean lower i/o.
I would carve one RAID volume set, i.e. RAID-10 or RAID-5, across the maximum number of drives, then carve out your LUNs/volumes as you need them. That way your data stripes across the maximum number of drives, while the individual LUNs/volumes are sized to what you need.
My best advice: experiment, and see if you notice a performance difference.
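The "more drives mean higher i/o" rule above can be sketched with standard rule-of-thumb numbers. The per-drive IOPS figure (~130 for 10k SAS) and the RAID write penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6) are generic ballpark assumptions, not measurements from this server:

```python
# Rough spindle-count i/o estimate. All figures are rules of thumb.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def effective_iops(drives, iops_per_drive, level, read_pct=0.7):
    """Approximate host-visible IOPS for a mixed read/write workload."""
    raw = drives * iops_per_drive          # total back-end spindle IOPS
    write_pct = 1 - read_pct
    # Reads cost one back-end op; each write costs `penalty` back-end ops.
    return raw / (read_pct + write_pct * WRITE_PENALTY[level])

# 8 x 10k SAS drives, 70% reads:
print(round(effective_iops(8, 130, "raid10")))  # 800
print(round(effective_iops(8, 130, "raid5")))   # 547
```

It shows the shape of the trade-off (RAID 10 pulls ahead as the write mix grows), but as said above, experiment and measure.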
I have decided to try and amend the order for the server before it is built to replace two of the 146GB drives with 73GB 15K RPM drives.
I will then go for RAID 10 with the two 15K drives, and a RAID 5 array of 5 disks with one hot spare. The ESX OS and swap will go on the RAID 10 array and the VMFS on the RAID 5.
"I will then go for RAID 10 with the two 15K drives and a RAID 5 array with 5 disks and one hot swap spare."
This is an old post, but for performance per dollar: (2) 72GB 15k SAS for the ESXi install, and (6) 146GB or 300GB 15k SAS in RAID 1+0.
That always works well for me, and in a virtual environment I generally buy refurb drives and keep two extras. It saves money, with not much risk.