stratolynne
Contributor

Storage (RAID sets, LUNs, etc.)

Setting up VI3 with 14 ESX hosts and an EMC CX320F (15 300GB FC and 45 1TB SATA II drives). RAID 5 and VMotion will be used. The first phase of the project is the migration of 25 Windows 2000 and 2003 servers.

I’m looking for ideas (best practices) for setting up the RAID sets and LUNs. Anyone out there have some experience to share? What is an appropriate size for the RAID sets, and how many LUNs?

Appreciating any ideas.

Message was edited by: tom howarth. Edited to remove Word XML markup from post.

11 Replies
weinstein5
Immortal

Keep in mind the largest LUN accessible by ESX/ESXi is 2 TB. Also, you will want to make sure the load is balanced across the LUNs - check out for additional information
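As a quick sanity check on the limit mentioned above, a sketch like this (the sizes tested are just examples) shows which LUN sizes fit under the 2TB-without-extents ceiling:

```python
# Sketch: guard against the per-LUN limit for ESX/ESXi noted above
# (2TB without extents). Example sizes are illustrative only.
MAX_LUN_GB = 2 * 1024  # 2TB expressed in GB

def fits_on_one_lun(size_gb: int) -> bool:
    """True if a datastore of this size fits on a single LUN."""
    return size_gb <= MAX_LUN_GB

print(fits_on_one_lun(1200))   # True  - e.g. a 1.2TB RAID 5 LUN
print(fits_on_one_lun(2500))   # False - would need extents
```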

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

taits
Enthusiast

A lot also depends on what services your Windows servers are providing and the size of the data. I.e. will you be virtualising Exchange? How many of your servers are database servers with separated quorum, voting, and data disks? Have you gathered any utilisation metrics with regards to IO? Usually the storage vendor has a set of recommended best practices for RAID sets and LUNs with regards to virtualisation, and each ISV has a set of best-practice recommendations for their app on storage. I think the trick is to try and balance performance with ease of manageability: the more LUNs, the harder they are to manage, and the smaller the benefit provided by DRS. Depending on the amount of data, speed of disk, number of spindles per array, and type of data, I usually start with the premise of maximising the number of spindles per array, and then LUNs of 500GB presented to be formatted as VMFS. If you're using DRS, the initial placement feature will spread the load out evenly across the datastores.
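The "maximise spindles, then carve 500GB LUNs" approach above boils down to simple arithmetic; as a sketch (the 9-disk array below is only an example, not a recommendation from this thread):

```python
# Sketch: usable capacity of a RAID 5 array and how many 500GB
# VMFS LUNs it yields. Drive counts/sizes are illustrative.
def raid5_usable_gb(disks: int, disk_gb: int) -> int:
    """RAID 5 loses one disk's worth of raw capacity to parity."""
    return (disks - 1) * disk_gb

def luns_per_array(disks: int, disk_gb: int, lun_gb: int = 500) -> int:
    """Whole 500GB LUNs that fit in one RAID 5 array."""
    return raid5_usable_gb(disks, disk_gb) // lun_gb

# e.g. a 9-disk RAID 5 array of 300GB FC drives:
print(raid5_usable_gb(9, 300))   # 2400 GB usable
print(luns_per_array(9, 300))    # 4 full 500GB LUNs
```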

Cheers

Lofty
Enthusiast

stratolynne,

Basically it all depends on the workloads you're going to be virtualising.

Obviously for starters I'd be looking to put the higher IO loads onto the FC disks and the lower onto the SATA. For example, any databases, Exchange logs etc onto the FC disks.

In terms of LUN sizes and RAID groups, I'd try to limit the number of LUNs per RAID group. I normally try to work at a 1:1 relationship. This way, you can balance the load on the storage processors by trespassing LUNs.

And use a single VMFS volume per LUN.

With sizing, look at running 20-30 VMs per LUN and try to mix up the workload as much as possible (within the constraints of high IO on FC etc).

So, you may want to look at something like -

5 x 300GB FC in 1 x RAID5 LUN = 1.2TB (VMFS_FC_Store01)

5 x 300GB FC in 1 x RAID5 LUN = 1.2TB (VMFS_FC_Store02)

5 x 300GB FC in 1 x RAID5 LUN = 1.2TB (VMFS_FC_Store03)

(I wouldn't go any lower than 5 disks per RAID 5 set as you're wasting disk, but any larger and you might start to load up with >40 VMs per LUN, and contention issues could occur)
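The "fewer than 5 disks wastes disk" point is just the RAID 5 parity fraction; a minimal sketch of that arithmetic (set sizes chosen only for illustration):

```python
# Sketch: fraction of raw capacity lost to parity in RAID 5,
# showing why very small RAID 5 sets waste disk.
def parity_overhead_pct(disks: int) -> float:
    """One disk's worth of parity spread over `disks` drives."""
    return 100.0 / disks

for n in (3, 5, 9):
    print(n, round(parity_overhead_pct(n), 1))
# 3 disks -> 33.3% lost, 5 disks -> 20.0%, 9 disks -> 11.1%
```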

Then you could break the SATA shelves up the same way. Then balance the LUNs across the storage processors.

Again, it all depends on the workloads you're trying to virtualise

Hope this helps

Thanks

Lofty

PS. If you found my response helpful, think about awarding some points.

Rich-Ontai
Enthusiast

Keep your backup strategy in mind as well, and offset that with the technical expertise of your staff. How much do you want / need them to interface with the SAN? In some cases it's better to keep them away from the SAN to prevent user error from disrupting your environment.

If you're going to depend on SAN snapshots for reverting to backup, then you need one LUN per disk / per VM. If you plan on using VCB only, you can put up to 10 VMs on a single datastore, but you'll find that performance is affected to some degree. The issue is that if you revert a LUN from the SAN, you'll be rolling back several VMs if they're all on the same datastore. I'm assuming with the nicely spec'd equipment you have, you're going to want to use the SAN features for backups, so you'll need one LUN per VM disk. Here's a tip: make the C drive LUN on your Windows disks BIGGER than the size of your template to account for the swap file size of the VM. Otherwise the VM will not start after you first clone it.
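The C-drive tip above can be sketched as a back-of-the-envelope calculation; the figures and the 10% headroom below are assumptions for illustration, not values from this thread:

```python
# Sketch: minimum LUN size for a one-VM-per-LUN layout, leaving
# room for the VM's swap file (roughly the VM's configured memory)
# on top of the template's disk. Numbers are hypothetical.
def min_lun_gb(vmdk_gb: int, vm_mem_gb: int, headroom_pct: int = 10) -> int:
    """Template disk + swap file, plus a safety margin."""
    base = vmdk_gb + vm_mem_gb
    return base + base * headroom_pct // 100

# e.g. a 20GB C: template on a VM with 2GB of RAM:
print(min_lun_gb(20, 2))   # 24 GB minimum
```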

Definitely break out log file and data storage for Exchange / SQL / etc. as suggested above. Use separate disks for those and make sure they're on FC disks.

Do you know what kind of backup you plan to use?

Rich-Ontai
Enthusiast

=-sorry for the dupe post-=

proden20
Hot Shot

Try to size for a maximum of 15-18 VMs per LUN. Heavy I/O could decrease that count. You've got a 2TB max VMFS (without extents). More spindles = better performance, but I've heard conflicting opinions regarding more than one LUN per RAID set. Low I/O VMs can go on the SATAs. Your VMotion won't be an issue because you are using fibre channel (as long as all the hosts in your cluster are zoned, that is). I've been told the "sweet spot" for VMFS vmdk size is about 256GB, but I'll double check that and post if different. Be sure to read the configuration maximums document for sizes and number of LUNs. Don't forget to align your guest partitions to prevent disk crossings.
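On the alignment point: a "disk crossing" happens when a guest partition starts part-way into an array stripe element, so one guest I/O touches two elements. A minimal sketch of the check, assuming the common 64KB stripe element size (the sector offsets shown are just the classic examples):

```python
# Sketch: is a guest partition's starting offset aligned to the
# array's stripe element? 64KB is assumed here; older Windows
# defaults start at sector 63 (offset 32256), which is misaligned.
SECTOR_BYTES = 512
STRIPE_ELEMENT_BYTES = 64 * 1024

def is_aligned(start_sector: int) -> bool:
    """True if the partition start falls on a stripe-element boundary."""
    return (start_sector * SECTOR_BYTES) % STRIPE_ELEMENT_BYTES == 0

print(is_aligned(63))    # False - classic misaligned Windows default
print(is_aligned(128))   # True  - starts exactly on a 64KB boundary
```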

stratolynne
Contributor

Just to clarify: one LUN per RAID set? As in, don't break up a RAID set into separate disk volumes (as would be seen in Windows). Have the RAID set be one big volume, and if a VM needs several disks, each disk would be some part of a RAID set?

proden20
Hot Shot

There is really a lot to consider, but start with the size and I/O demand of your VMs. More spindles will improve your performance, but could be too expensive for only one LUN or VM. In my environment I'll take similarly sized (cpu/mem/storage, such as desktops) VMs and size the LUN for 15-18 of them with 30% overhead (for memory resizing and potential snapshots).
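That sizing rule can be sketched directly; the VM counts and per-VM sizes below are only example figures under the 30% overhead assumption stated above:

```python
# Sketch: size a LUN for a batch of similarly sized VMs, with
# ~30% overhead for memory swap growth and potential snapshots.
def lun_size_gb(vm_count: int, avg_vm_gb: int, overhead_pct: int = 30) -> int:
    """Total VM footprint plus the stated percentage of headroom."""
    raw = vm_count * avg_vm_gb
    return raw + raw * overhead_pct // 100

# e.g. 16 desktop VMs at ~20GB each:
print(lun_size_gb(16, 20))   # 416 GB
```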

Oracle, for instance, is much more intensive, so that gets its own LUN and RAID set. You'll have to watch your LUNs at the SAN level and see what is creating demand, and may have to break up LUNs or RAID groups, etc., as your environment comes to life. This is another reason SVMotion is your friend.

I personally have multiple LUNs on some RAID sets and don't really see an issue, but I wouldn't do that with high demand databases.

steveharder
Contributor

hi,

I'm starting on a VM project at work as well and plan to get SAN storage with 36x 300GB SAS HDs for the VMs.

I was planning to create one RAID 5 group out of the entire 36 HDs and then create the LUNs as needed.

Is this wrong? I thought that more disks in a RAID group is better for IO.

Thanks

JohnADCO
Expert

Not considered best practice. You will have to stagger VM starts at a minimum if you do it that way.

It's not really wrong, as I do it that way most of the time, although I have never had that many spindles involved, even with real hard-hitting Windows DB servers. With that many spindles involved I'd expect the disk group to be able to handle lots of intensive I/O from lots of VMs and hosts.

The biggest fear is how long it will take your hot spare to build in when there is a drive failure. Until it builds in, you are massively exposed to the risk of another drive failure during the rebuild. Lose one more drive in that time and you lose all your virtual disks and VMs in one shot.

AndreTheGiant
Immortal

For the FC disks keep it simple, as suggested before:

4-5 disks in RAID 5 for each RAID group, and a single LUN (< 2TB) in each RAID group.

Use 1 disk for a global hot spare and put the first 5 disks (the system disks) in one RAID group.

For the SATA drives it's up to you: if you need space then use RAID 5 (max 6-7 disks in each RAID group); if you want more performance then use RAID 10.
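The space-vs-performance trade-off between those two options is easy to see in the capacity arithmetic; a sketch, with a hypothetical 6-drive SATA RAID group of 1TB disks:

```python
# Sketch: usable capacity of the two SATA options mentioned above.
# RAID 5 keeps all but one disk; RAID 10 mirrors, keeping half.
def usable_tb(disks: int, level: str, disk_tb: float = 1.0) -> float:
    if level == "raid5":
        return (disks - 1) * disk_tb
    if level == "raid10":
        return disks / 2 * disk_tb
    raise ValueError(f"unknown RAID level: {level}")

print(usable_tb(6, "raid5"))    # 5.0 TB usable (more space)
print(usable_tb(6, "raid10"))   # 3.0 TB usable (more performance)
```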

As mentioned above, split the LUNs between the two SPs.

For storage groups you have to decide how many clusters you want/need.

For example, if you create 3 different VMware clusters (e.g. 4/5/5 ESX hosts) you have to create 3 different storage groups.

Andrea

Andre | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro