Whatever disks (SSD or HDD) you use, you need at least a minimal RAID level. Here you have two disks, so you can configure RAID 0 or RAID 1. With RAID 0 you get the full capacity of both disks as usable space, but RAID 1 reduces the usable capacity compared to RAID 0. Best practice is RAID 1, with a small LUN created for the ESXi partition.
Data RAID depends on what you can afford and what you are looking for in terms of performance. Usually people go with a simple RAID 5 set if they have a limited number of VMs. For the hypervisor (ESXi) there's no point in going with 2 SSDs in RAID: ESXi loads into memory, so the performance the SSDs provide will normally not add much, only during boot, and you don't do that regularly. So go for dual SD cards, which are a lot cheaper and still provide resilience.
You should consider having two RAID sets: RAID 1 for the OS (ESXi host), and RAID 5, with or without a hot-spare drive, for the data LUNs of the VMs, since you are going to have the OS as well as data on the same server. This is the best practice most engineers follow when they have a single server running both the OS and the data workloads.
This gives you good performance and better protection for your data.
If you have the option to combine SAS drives and SSDs for the RAID 5 volume, that would be great, as it may give you maximum performance for the read/write operations performed by the VMs. My thought: in such a single-server environment, the OS doesn't need SSDs.
I can't offer a rule of thumb, but here are some things you may want to consider for the design (some of them were already mentioned above):
- space requirements
- performance requirements (note that for performance you also need a controller with a BBU option)
- number of HDDs, and HDD sizes
Why do HDD sizes matter? RAID 1 and RAID 5 can handle a single HDD failure, and need a rebuild onto a new HDD in order to be safe again. For large HDDs (>1 TB is my personal limit) a rebuild can take considerable time and puts a high load on the disks, so I'd rather consider RAID 6 for such disks. In all cases, plan for a hot-spare disk.
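To make those trade-offs concrete, here's a minimal sketch (the function name, the 1 TB disk size, and the example disk counts are my own assumptions, not anything from a vendor tool) of usable capacity versus fault tolerance for the common RAID levels:

```python
# Usable capacity and fault tolerance for common RAID levels.
# usable_tb() and the 1 TB disk size are illustrative assumptions.

def usable_tb(level, disks, disk_tb=1.0):
    """Return (usable capacity in TB, disk failures the set survives)."""
    if level == 0:
        return disks * disk_tb, 0           # striping only, no redundancy
    if level == 1:
        return disk_tb, 1                   # two-disk mirror
    if level == 5:
        return (disks - 1) * disk_tb, 1     # one disk's worth of parity
    if level == 6:
        return (disks - 2) * disk_tb, 2     # two disks' worth of parity
    if level == 10:
        return disks // 2 * disk_tb, 1      # mirrored stripes; 1 failure guaranteed
    raise ValueError(f"unsupported RAID level: {level}")

for level, disks in [(0, 2), (1, 2), (5, 4), (6, 5), (10, 4)]:
    cap, ft = usable_tb(level, disks)
    print(f"RAID {level:>2} over {disks} x 1 TB: {cap:.0f} TB usable, survives {ft} failure(s)")
```

The RAID 6 row shows why it only starts to pay off with larger disk counts: two disks' worth of parity out of five leaves the same usable space as a four-disk RAID 10.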
Thanks for the feedback!
I will not be going with SSDs. Just looking at something like 2 SD cards mirrored for the ESXi hypervisor and 4 SAS drives (3 for RAID 5 and one for a hot spare).
Thanks for the feedback a.p.
This is what I was thinking:
2 SD cards at RAID 1
Three 1 TB SAS drives (15k, perhaps)
I do like your suggestion about using RAID 6 (4 disks) + 1 hot spare. That would give me two-disk fault tolerance, and from what you are saying, the rebuild time would not be an issue.
I doubt that 2 SD cards can be used to create a RAID 1 mirror. Even if you manage to do that, ESXi will probably not recognize such a RAID, and you will see two single SD cards available for installation.
You are not going to use vSAN (which requires truly free disks), so why use SD cards? Just create the desired RAID on top of your disks and install ESXi on that logical volume (the volume will be partitioned and you will lose a few GBs from the total capacity, but the rest will be used as a datastore).
As already mentioned, the desired RAID level depends on the capacity and performance you are looking for. If you have a few spare bucks, I would look at the Intel DC S3610 SSDs.
Depends on the server, but it's certainly something Dell and Cisco offer, and arguably cleaner/cheaper than having dedicated boot disks.
Modern servers (at least Dell, HP, Lenovo) have dual SD modules, which can run RAID 1 across the 2 SD cards.
They are usually presented to the hypervisor as a USB disk, so the ESXi installer has no problem detecting them.
Anyway, I agree that if you are not going for a diskless server, there's no point getting the SD cards.
On a 5-disk server, RAID 6 + spare is a bit of overkill.
If you are planning to use 5 disks, I'd recommend RAID 5 + spare (for disk space) or RAID 10 + spare (for speed). RAID 6 + spare would be the most wasteful option IMO, regarding both speed and disk space.
For sure, install the OS (ESXi) on 2x SD cards.
The rest depends on the VMs' needs. We usually use RAID 10 (4x or 8x 1.2 TB SAS disks). If you don't need the performance, use RAID 6.
Tested on hundreds of single ESXi hosts around the world.
Single ESXi hosts with local VMFS storage using RAID 5 seem to be the type of environment that runs the highest risk of data loss due to VMFS-corruption problems.
That's the lesson I learned in a few years of remote recovery work.
So I tell all my customers to avoid RAID 5 at all costs.
Another design mistake I see a lot in small environments is the use of a single datastore only.
This produces big problems when that VMFS-volume needs maintenance.
So my recommendation for single hosts with local storage is to use datastores not larger than 2TB (anything much larger than that can no longer be evacuated during a night or weekend).
Another important tip to survive unexpected power failures without problems is to use eager zeroed thick provisioned vmdks for all VMs.
The more thin provisioning is used (thin vmdks and snapshots) the higher the risk to lose data during power failures.
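A rough illustration of the underlying difference (this is just a sparse file on a local filesystem, not ESXi or VMFS itself, and exact allocation behavior depends on the filesystem): a thin file claims a size but allocates blocks lazily, while a fully written file has all its blocks on disk up front.

```python
import os
import tempfile

SIZE = 1024 * 1024  # 1 MiB apparent size
allocated = {}

with tempfile.TemporaryDirectory() as d:
    thin = os.path.join(d, "thin.img")
    thick = os.path.join(d, "thick.img")
    with open(thin, "wb") as f:   # "thin": set the length only, blocks allocated lazily
        f.truncate(SIZE)
    with open(thick, "wb") as f:  # "thick": every block written with real zeros
        f.write(b"\0" * SIZE)
    for name in (thin, thick):
        st = os.stat(name)
        allocated[os.path.basename(name)] = st.st_blocks * 512
        print(f"{os.path.basename(name)}: {st.st_size} bytes apparent, "
              f"{st.st_blocks * 512} bytes allocated")
```

Both files report the same apparent size, but only the fully written one is guaranteed to have its blocks in place, which is the property an eager-zeroed thick vmdk gives you when power fails mid-write.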
RAID 1 for the ESXi OS and RAID 5 for the datastore.
OK, I have seen a lot of weird configs, like you guys have 4 HDDs and somehow combine them with 1 hot spare, 2x in RAID 0 and 1x in RAID 1; I'm rambling now, but it's weird, right?
Why not go the easiest way? Buy a 3rd-party RAID controller (e.g. Adaptec), put 4 HDDs in it, and make it a RAID 5. You can hot-swap the disks, RAID 5 has redundancy (unlike RAID 0), and you get a boost in read speeds. Unlike RAID 1 you will write more slowly, but that depends on what you are going to use it for. Nowadays RAID controllers have 1 or 2 GB of cache, so a lot of the load is usually handled pretty well, and the controller manages the disks itself...
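That read-boost/write-penalty trade-off can be sketched with the classic write-penalty rule of thumb (the 150 IOPS per disk and the 30% write mix below are assumptions for illustration, not measurements of any real controller):

```python
# Rule-of-thumb front-end IOPS for a 4-disk RAID set; all figures are assumptions.
WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}  # back-end writes per front-end write

def effective_iops(level, disks, iops_per_disk=150, write_fraction=0.3):
    """Approximate front-end IOPS for a given read/write mix."""
    raw = disks * iops_per_disk
    # each read costs 1 back-end IO, each write costs WRITE_PENALTY[level] IOs
    return raw / ((1 - write_fraction) + write_fraction * WRITE_PENALTY[level])

for level in (1, 5, 6, 10):
    print(f"RAID {level:>2}, 4 disks: about {effective_iops(level, 4):.0f} front-end IOPS")
```

The higher the write fraction, the more RAID 5's four back-end IOs per write (read data, read parity, write data, write parity) eat into the throughput, which is why it reads fast but writes slowly.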
Never, ever, ever in your life go with onboard motherboard RAID, since the vendors usually cut costs there and you get really weird RAID setups.
RAID 5, with the ability to hot-plug HDDs even with SATA (3rd-party controllers can do that).
You'll be thanking me later when one of those HDDs goes woink.
Why did I reply? I was originally searching for a RAID adapter that could integrate into the ESXi dashboard, so it would show me the HDD status. Well, going to look elsewhere. See ya o/ And happy uptime.
EDIT: Why are you guys putting the ESXi OS on SSDs, SD cards, etc.? I would be in a cold sweat every day knowing that when that dies, I'll lose my config. So I've installed ESXi right on my RAID 5 HDDs. It's a logical volume, so what we should be doing is creating and maintaining one logical volume that stays alive no matter what, not creating more logical volumes. OK, boot time may be quicker, but still?