VMware Cloud Community
zjayz
Contributor

Recommended Disk/RAID setup

Hi, I'm just after a bit of advice on the RAID setup for the following:

Server: HP ML350 G6
16x 300GB 10K SAS disks
P410i Smart Array controller with 1GB write-back cache
32GB RAM
The server is going to run 3 virtual servers on ESXi 5:
SBS 2011 Premium – 25 users
SQL Server – not under a lot of load; I think the DB is under 1GB
Terminal Server – 5 users, just using a third-party app

Now, my question is the RAID setup. Which of the following would be best?
1. One big RAID 10 with hot spares
2. Mirrors for each OS – a 4-disk RAID 10 for the Exchange DB/logs and another for the SQL DB/logs
3. Mirrors for OS – mirrors for DBs – mirrors for logs

Or would you recommend something different?
Thanks for your help

14 Replies
golddiggie
Champion

I would install ESXi 5 to a USB flash drive 1-8GB in size and then carve up the drives into RAID 10 arrays as needed. Depending on what the storage needs are for the VMs, size the arrays accordingly. I wouldn't make a dedicated LUN/RAID array for the OS drives and such; that's a holdover from the days of physical servers.

Determine what you'll want for a share on the SBS 2011 server, and make sure the LUN is sized to meet that need. The terminal server won't need much in the way of storage resources, so I wouldn't even worry about that. Figure out what you'll need for the SQL server, though. If it's only going to have light to medium loads, then you could have one group with four drives (RAID 10) and the balance in another RAID 10 array. Keep in mind, the more spindles you have in the group/array, the higher your IOPS will be. Higher IOPS is a good/great thing. If you go with hot spares, then make it one RAID 10 array and LUN; just configure it to be under the maximum size allowed. Basically, with two hot spares, RAID 10 will get you about 1957GB of usable space.
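A quick back-of-envelope on where that ~1957GB figure comes from (a sketch only; exact formatted capacity depends on the controller, and the per-spindle IOPS number is an assumed ballpark for 10K SAS, not a measured value):

```python
# Rough sizing/IOPS estimate for 16 x 300GB 10K SAS drives with 2 hot spares.
# The per-spindle IOPS figure is an assumed ballpark, not a measured value.

DISK_GB = 300                  # marketing gigabytes (10^9 bytes) per drive
DATA_DISKS = 16 - 2            # two drives reserved as hot spares
IOPS_PER_SPINDLE = 130         # assumed for a 10K SAS drive

usable_gb = DATA_DISKS // 2 * DISK_GB        # RAID 10 keeps half the raw space
usable_gib = usable_gb * 10**9 / 2**30       # what the controller/OS reports

print(f"RAID 10 usable: {usable_gb} GB (~{usable_gib:.0f} GiB)")   # 2100 GB, ~1956 GiB
print(f"Aggregate read IOPS across {DATA_DISKS} spindles: ~{DATA_DISKS * IOPS_PER_SPINDLE}")
```

The ~1956 GiB is just the 2100GB of raw RAID 10 space reported in binary units, which is where the "about 1957GB" comes from.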

Personally, I'm using a SAN for my home lab, as well as at work (EMC storage at work, QNAP at home). We configure the drives into arrays depending on what we need for capacity, while keeping an eye on which RAID level we need. If we need higher performance, then RAID 10 is used. If we don't, then it's typically RAID 5. At home, I have the array as a single RAID 5 group, with LUNs carved up from that presented over iSCSI. I have several 512GB LUNs, with a couple of smaller ones (for templates and/or ISO file storage).

If/when you decide you want to leverage more of what you get with running ESXi/vSphere, you'll want another host. At that point, you'll really want to move to using a SAN for all your VMs and associated files (ISOs and such). IMO, it's best to start thinking along those lines and see about getting funds for it in the next budget year/refresh.

zjayz
Contributor

Cheers, forgot to mention I already have the USB drive for the ESXi install. Disk space isn't going to be an issue at all, as they are moving from SBS 2003 with a 75GB database limit and I'm sure the SQL databases and data come to no more than 100GB. They didn't go for a SAN because they didn't want to pay the extra, and from what I know a SAN is more viable if you need high availability (and other vSphere tricks) – unless you can tell me there are other benefits to using a SAN with the ESXi 5 free version?
Also, what SAN would you recommend for a setup like this? I've looked at FreeNAS and Openfiler (Linux), but I wouldn't dare put these in a production environment, and the cost of a decent SAN seems quite high.

But from what you're saying, I think one RAID 10 with 14 disks and 2 hot spares is the way I will go.

hidoasu
Contributor

In my opinion, the best way to do it is:

1. RAID 1 for the ESXi OS (2 disks) – install ESXi, with the remaining space used for .iso images of the OS + application disks.

2. 12 disks in a single RAID 5 (storage for the .vmdk files) + 2 global hot spares (covering both the RAID 1 and the RAID 5).

That's all

zjayz
Contributor

Surely, if there are no disk space considerations, RAID 10 is always going to be preferred over RAID 5 because of write speeds, and if a disk fails RAID 5 takes a big performance hit because it rebuilds from parity, while RAID 10 doesn't take much of a performance hit.
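Rough arithmetic behind the write-speed point, using the usual textbook write-penalty rule of thumb (2 back-end I/Os per random write for RAID 10, 4 for RAID 5); the per-spindle IOPS number here is just an assumed ballpark:

```python
# Effective random-write IOPS for 14 data spindles under the classic
# write-penalty model: RAID 10 costs 2 back-end I/Os per write, RAID 5 costs 4
# (read data, read parity, write data, write parity). Spindle IOPS is assumed.

SPINDLES = 14
IOPS_PER_SPINDLE = 130     # assumed ballpark for a 10K SAS drive

for level, penalty in (("RAID 10", 2), ("RAID 5", 4)):
    write_iops = SPINDLES * IOPS_PER_SPINDLE // penalty
    print(f"{level}: ~{write_iops} random write IOPS")    # RAID 10 ~910, RAID 5 ~455
```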
Also, the ESXi install will be on the USB drive.
Thanks

MartinPasquier
Contributor

I totally agree with the last answer. If I had the same hardware, I'd do one RAID 1 for VMware and all of the rest as RAID 5 or RAID 6 with one or two hot-spare disks.

It gives you a single level of storage management in the ESXi console, and you'll probably be happy, the day you have to expand volumes, not to have to recreate all the volumes under the RAID controller (backups and so on).

And RAID 5 or RAID 6 performs better when it is configured with a lot of disks, compared to only 3 or 4 disks.

Slingsh0t
Enthusiast

My 2 cents:

- 2 disks in RAID 1 for ESXi (if the USB fails and the hypervisor goes with it, what happens to your VMs?).

- 12 disks in RAID 10 with 2 global hot spares.

zjayz
Contributor

If the USB goes, don't you just stick another one in, run the installer, and it will pick everything it needs back up? I pulled one out before and the VMs continued to run.

golddiggie
Champion

If your VMs are on the RAID array, then nothing happens to them. You'll just need to install ESXi 5 onto a new flash drive if the one you're using fails. IMO, if you use a quality make/model of USB flash drive, then you really don't need to worry about it. This is why I wouldn't go cheap on this part. Better to spend a few dollars more for a quality make/model and KNOW it will have more than enough run time.

Basically, it's the same as if you had ESXi 5 on the dual-drive array and you lost both drives. You'll have a small window of downtime while you replace the drives and install ESXi 5 again, but the VMs will remain. You'll just need to re-add them to the inventory for the host. IMO, a minor item. Of course, the VMs will stop running while the media ESXi is installed on isn't functioning, but that's the same either way.

If you're really concerned about uptime, then DON'T place VMs on DAS, and don't go with just one host. Have at least a pair of hosts and put the VMs on SAN LUNs. Then it won't matter if you have a host go down, or into maintenance mode. All the VMs (or the ones you care about) will be picked up by the other host.

Personally, I have zero concerns about the flash drive working for the long term. Even IF it does go sour, I'll be able to go to a local store and pick up a replacement in under 30 minutes. I'll have ESXi 5 installed and configured within a short amount of time after that. I plan to eventually get a second host server, at which point it will be even less of a concern to me.

hidoasu
Contributor

Hi,

Maybe you did not understand what I mean.

1. I'm not recommending installing on USB – in this case, if the USB fails, I don't care.

2. RAID 1 for 2 x 300GB HDDs, install ESXi on it, with the remaining space serving .iso images. Right?

Hope this helps.

Thanks.

hidoasu
Contributor

Yep,

I agree with you on using RAID 10 if your application has high I/O. In this case, I think RAID 5 is better for redundancy and doesn't waste HDDs the way RAID 10 does.

HIDOKAI

http://hidoasu.wordpress.com

JESX35
Enthusiast

You have a few configuration options available to you, and it really depends on space vs. performance, so you will need to figure out which is more important to you and what you require performance-wise. Here are some ideas:

Option 1

2 x 300GB drives - ESXi 5 OS - RAID 1 mirror

14 x 300GB drives - RAID 10 datastore - 2.1TB

Option 2

2 x 300GB drives - ESXi 5 OS - RAID 1 mirror

4 x 300GB drives - RAID 10 datastore - 600GB - SBS server

10 x 300GB drives - RAID 10 datastore - 1.5TB - SQL Server / Terminal Server

Option 3

2 x 300GB drives - ESXi 5 OS - RAID 1 mirror

4 x 300GB drives - RAID 10 datastore - 600GB - SBS server

10 x 300GB drives - RAID 5 datastore - 2.7TB - SQL Server / Terminal Server

You could probably even switch it up for another 3-4 options based on space vs. performance. Just remember, RAID 10 will get you the best performance but the least amount of space, and RAID 5 will get you the most space but the slowest performance. With that said, RAID 5 is really good at reads, so if a majority of your I/O is reads RAID 5 does really well; however, where RAID 5 eventually slows down is on writes. It has to write the parity stripe at some point, which slows it down slightly. RAID 10 can read/write as much as you want with little to no performance hit.
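A small sketch to sanity-check the datastore sizes in the options above (raw array capacity in decimal GB, before VMFS formatting overhead, so the real datastores come out a little smaller):

```python
# Usable capacity per array layout, using 300GB (decimal) drives.

DISK_GB = 300

def usable_gb(disks: int, raid: str) -> int:
    if raid in ("RAID 1", "RAID 10"):
        return disks // 2 * DISK_GB      # half the drives hold mirror copies
    if raid == "RAID 5":
        return (disks - 1) * DISK_GB     # one drive's worth of space goes to parity
    raise ValueError(raid)

print(usable_gb(14, "RAID 10"))   # 2100 GB -> the 2.1TB in Option 1
print(usable_gb(4, "RAID 10"))    # 600 GB  -> the SBS datastore in Options 2 and 3
print(usable_gb(10, "RAID 10"))   # 1500 GB -> the 1.5TB in Option 2
print(usable_gb(10, "RAID 5"))    # 2700 GB -> the 2.7TB in Option 3
```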

Some people would even say you could carve the 2 drives I've allocated as a mirror for the OS into additional logical drives to save some of that space, as the RAID controller can do this. However, I have run into issues where a drive fails and the logical drive wouldn't rebuild, so I usually just avoid that possibility and dedicate 2 smaller drives to the OS.

Hope this helps

zjayz
Contributor

I believe they have ordered a £50 HP VMware-approved USB drive, so it should be fine for ESXi. But couldn't the ISOs just be placed in the main datastore (RAID 10 with 14 disks and 2 hot spares), as they will have no more than 20GB of ISOs?

I've attached a PDF I found with RAID performance figures; it looks like RAID 5 wins on some charts, but over 12 disks RAID 10 looks to be the best option, apart from 64KB sequential read performance.


I'm not disagreeing with anyone, by the way. I'm sure the server will be fine on either RAID; it's just something that has always played on my mind. The main worry was whether or not to have one RAID for all VMs, which I know now.

golddiggie
Champion

Just be aware of the penalty when using RAID 5 with fewer drives (<5-6). IMO, the better performance, coupled with higher redundancy, makes RAID 10 a solid choice in your case. I would do all the drives (with the hot spares) in your setup that way.

JESX35
Enthusiast

Yes, you can make an ISO folder on any datastore you want. You could also create a separate partition for this, but I see no need for it in smaller environments. It won't put that much of a load on the disk subsystem, as the only time you will be using the ISOs is during installation anyhow.
