VMware Cloud Community
mikerpalmer
Contributor

VI3 and EMC Clariion set-up

I have searched around on this forum and have found a wealth of knowledge on VI3 setups. I am a little confused about the best way to set up a CX600 for use with VI3. Here are my thoughts; I would love to hear what others think.

I have a CX600 in a dev environment with one tray of 15 146GB FC drives. My question is about how to set up the RAID. I have read that you should not put more than 10-15 VMs on a LUN, and that you want a RAID 5 group to be between 5 and 9 drives. Here is what I am thinking: make two RAID 5 (6+1) groups, then carve a few 200GB metaLUNs striped across both of them.

Thoughts? I did not want to do a large RAID 5 (like a 13+1) because it's not recommended (rebuild times, etc.), but I wanted to get as many spindles behind each LUN as possible (hence the metaLUN). Is it acceptable to create a bunch of smaller metaLUNs across the RAID groups instead of one large metaLUN? Any performance issues?
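
For anyone checking my math, here is the back-of-the-envelope arithmetic as a quick Python sketch (not EMC tooling; the drive count, drive size, and metaLUN size are just the numbers above, and real usable capacity will be a bit lower once FLARE overhead comes off):

DRIVE_GB = 146
SHELF_DRIVES = 15
RAID_GROUPS = 2                 # two RAID 5 (6+1) groups
DRIVES_PER_GROUP = 7            # 6 data + 1 parity
HOT_SPARES = SHELF_DRIVES - RAID_GROUPS * DRIVES_PER_GROUP   # 15 - 14 = 1

usable_per_group_gb = (DRIVES_PER_GROUP - 1) * DRIVE_GB      # 6 x 146 = 876 GB
component_gb = 100                                            # half of a 200GB metaLUN per group
metalun_gb = component_gb * RAID_GROUPS                       # 200 GB

print(f"Hot spares left on the shelf: {HOT_SPARES}")
print(f"Usable space per 6+1 group:   {usable_per_group_gb} GB")
print(f"Each {metalun_gb} GB metaLUN takes a {component_gb} GB component from each group,")
print(f"so roughly {usable_per_group_gb // component_gb} of them fit before the groups fill up")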

Thanks!

boydd
Champion

Sounds good - I personally like 500-600GB LUNs. Use MRU (Most Recently Used) as the path policy.

DB
InsaneGeek
Enthusiast

Your RAID size is exactly what I do on my CX700. It works well because you can do two 6+1 RAID groups plus a hot spare on a shelf. I do 4-way metas instead of 2-way metas.

I create a number of smaller metaLUNs across RAID groups and have standardized on a 500GB metaLUN size (4x 125GB components). Each RAID group is ~1.8TB (320GB 5400rpm PATA drives), so I have multiple metas in each RAID group, but normally I don't have all of them going to the same host on the SAN.

For ATA this is about the max the SEs want to see; rebuild times on ours take a *long* time. For fibre you can go larger, but I normally don't. My performance on those slower drives from a single RAID group is ~31MB/sec; when I stripe a metaLUN across groups I get >180MB/sec.
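
To put some rough numbers on that (a sketch only; the component size, group count, and single-group rate are the figures from this reply, and real scaling depends on the workload - I actually see better than linear, >180MB/sec, on the meta):

COMPONENT_GB = 125
GROUPS_IN_META = 4
SINGLE_GROUP_MB_S = 31            # measured rate of one PATA RAID group (from this reply)

metalun_gb = COMPONENT_GB * GROUPS_IN_META               # 500 GB meta
naive_estimate_mb_s = SINGLE_GROUP_MB_S * GROUPS_IN_META # linear-scaling illustration only

print(f"{metalun_gb} GB metaLUN striped across {GROUPS_IN_META} RAID groups")
print(f"Naive linear-scaling estimate: ~{naive_estimate_mb_s} MB/sec")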

AMcCreath
Commander

Hi Mike,

When you say that you have 1 tray of drives, do these include the FLARE (Vault) disks?

If so, do not use them unless you absolutely have to.

If you choose to use them, be aware that FLARE strips a few GB off the top of these disks, and any disks you add to this RG will also lose that capacity. There will also be a performance impact across these disks.
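
To see the capacity side of that in numbers, here is a tiny sketch; the per-disk reservation is a placeholder, since how much FLARE actually takes depends on the FLARE revision:

DRIVE_GB = 146
RESERVED_GB = 6        # hypothetical per-disk reservation, for illustration only

def usable_data_gb(data_drives, on_vault_raid_group):
    # every disk in the vault RAID group gives up the reserved slice so the stripe stays even
    per_disk = DRIVE_GB - (RESERVED_GB if on_vault_raid_group else 0)
    return data_drives * per_disk

print(usable_data_gb(6, on_vault_raid_group=True))     # data drives of a 6+1 group on the vault disks
print(usable_data_gb(6, on_vault_raid_group=False))    # same group on a clean shelf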

Apart from that your config is sound. 6+1 is the norm for capacity, with one hot spare at the end of the shelf. If you need performance, move to a 4+1 configuration.

All the best,

Andy

mikerpalmer
Contributor

Thanks for the information, it is very helpful! I have a couple of follow-up questions.

1. Just to clarify, there is no real performance hit from running multiple LUNs on a single RAID group? From what I have read, VMware can have performance issues with one large LUN because of how it addresses the space (SCSI reservations?), so smaller LUNs are better (if someone can point me to why, that would be great). I just want to make sure that multiple metaLUNs per RAID group is the way to go.

2. Yes, I am using the same tray and drives that contain the FLARE OS. I knew about the reserved space, and it does not appear to use that much, but I was not sure about the performance hit. How much of a performance hit will I see using the FLARE OS drives in a RAID group?

Thanks again!!

InsaneGeek
Enthusiast

That in itself doesn't have an impact, compared with using up the entire RAID group. But there is a performance impact the more you move the drive heads: the more of the drive you use, the more the heads have to seek between the outer and inner parts of the platters. The upside is when you do not use much of the drive; less space in use tends to mean less head movement. With multiple LUNs in the same RAID group you'll jump over unused space to get to the next LUN (but I've not seen this be an issue).

I've not seen much of a performance hit on those drives, as they are primarily used for booting the SPs and for logging. When the array is running, the impact is very minimal. The SEs prefer not to have anything else on them because a RAID rebuild then takes longer, and they get "twitchy" about having a dual-disk failure in the boot RAID group, which would be bad.

epping
Expert

The performance impact of using the vault drives depends on how much I/O is being thrown at the disks.

I would not recommend putting a large Exchange, SQL, or Oracle instance on these drives, but they would be fine for many other jobs.

The only way to work it out is to calculate the I/O. On an FC disk EMC recommend 180 IOPS; work out what you need, divide it by 180, and that gives you the spindle count required.
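
A worked example of that rule of thumb (the 180 IOPS per FC spindle figure is the one quoted above; the 1,500 IOPS workload is made up):

import math

IOPS_PER_FC_SPINDLE = 180        # EMC rule-of-thumb figure quoted in this reply

def spindles_needed(workload_iops):
    # divide the workload by what one spindle delivers and round up
    return math.ceil(workload_iops / IOPS_PER_FC_SPINDLE)

print(spindles_needed(1500))     # hypothetical 1,500 IOPS workload -> 9 spindles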

mikerpalmer
Contributor

Thanks!

kreischl
Enthusiast

"Your RAID size is exactly what I do on my CX700. It works well because you can do two 6+1 RAID groups plus a hot spare on a shelf."

I'm confused by this. What is the 1 for in the 6+1? The hot spare? You mention the hot spare for the shelf.

thanks

EDIT - Never mind. The +1 is the parity drive. Six usable plus one parity, times two, is 14 drives; the last drive (hot spare) makes 15 drives to a drawer.

I need a whiskey. :)

Message was edited by: kreischl

roundorange
Contributor

Hi guys,

Just wondering: are metaLUNs applicable only to the EMC CX series, or do they also come with the EMC AX series and/or other SANs like the IBM DS series?

TIA!

jurajfox
Enthusiast

I have found that metaLUNs significantly improve performance in an ESX environment, so this is the right approach. Once you get to 24-32 spindles you'll notice the performance increase.

Splitting up the VMDK files by drive letter across different RAID groups also works well.
