VMware Cloud Community
fotd
Contributor

Clariion LUN organization and DR questions

I'm in the process of buying two new EMC Clariion CX3-10c's. One will be Fibre Channel in my production site and the other will be iSCSI at our disaster recovery site. My plan is to replicate production to the DR site and use SRM (more on that in question #2). I'm going SATA and iSCSI at the DR site for cost savings; in production I already have an existing SAN fabric the FC will plug into. I'm running a 5-server cluster in production with about 35-40 VMs, mostly file servers, web servers, and simple app servers. Nothing very intensive, just normal infrastructure-type things: file/print, WSUS, Symantec, ColdFusion web servers, etc. My SAN experience is fairly limited; I've worked with a few small SANs on my own, but in most situations I've worked in large places with dedicated storage teams.

Prod. SAN = 21x 400GB FC disks + 5x 400GB FC disks for the SAN OS and "light I/O use"

DR SAN = 10x 1TB SATA disks + 5x 1TB SATA disks for the SAN OS and "light I/O use" (I know disk-wise this is overkill, but I just filled the shelf up to help offload disk I/O for the RAID sets.)

Question #1: Should I go 4x 5-disk RAID5 sets or some type of metaLUN? I was thinking 4x 5-disk RAID sets, which should give me about 1500GB per RAID set. Then 1x 5-disk RAID5 and 1x 4-disk RAID5 for the DR SAN (splitting the replicated LUNs between them).

#1a. Is it a bad idea to go with two LUNs per RAID set, or should I stick with one 1500GB LUN? I'm thinking two 750GB VMFS volumes per RAID set, carved into two LUNs. I've had a hard time finding info on the I/O hit of creating two LUNs per RAID set; I know I should only do one VMFS per LUN. Just wondering if I'm setting myself up for failure, or whether I should lower my RAID set size, i.e. 5x 4-disk.
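For reference, here's the back-of-envelope math behind the numbers above (a rough sketch; the 7% formatting overhead is just my assumption, actual bindable capacity will vary):

# Back-of-envelope RAID5 usable capacity: one disk's worth of space
# goes to parity; the 7% overhead for formatting/vendor-vs-binary GB
# is my own assumption, not an EMC figure.
def raid5_usable_gb(disks, disk_gb, overhead=0.07):
    return (disks - 1) * disk_gb * (1 - overhead)

prod_set = raid5_usable_gb(5, 400)   # ~1488 GB per production RAID set
print(f"Prod 5-disk set: {prod_set:.0f} GB -> two ~{prod_set/2:.0f} GB LUNs")

dr_5 = raid5_usable_gb(5, 1000)      # ~3720 GB
dr_4 = raid5_usable_gb(4, 1000)      # ~2790 GB
print(f"DR sets: {dr_5:.0f} GB and {dr_4:.0f} GB")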

Question #2: I'm purchasing MirrorView/A, since my DR site currently doesn't have the bandwidth to handle synchronous replication. According to EMC, they will have an SRM adapter for MirrorView/A by the end of the year. I'm hoping they're not just trying to pacify me; it seems to me an asynchronous SRM plugin would need to be a lot more complicated, and I could see it never coming out. Just curious if anybody has an opinion about it; am I crazy for thinking this? BTW, I'll be using MirrorView/A with snapshots.

Question #3: Any glaring flaws in my plan? I realize I'm still looking at crash-consistent copies, which is a little worrisome. I only need a 12-24 hour recovery point for this data; I'm not really worried about up-to-the-second recovery. Our core business system already has a different DR plan in place with a DB cluster.

4 Replies
lorimer
Contributor

If you use the first 5 disks for anything I/O intensive, it will affect performance across your entire SAN. The array runs embedded Windows XP, and that OS is stored on those disks.

Your SAN should come with best practices for RAID set sizes. I vaguely recall EMC saying something about splitting the RAID sets up across groups of two trays: one tray with 9/6-disk sets, the other with 9/5 plus a hot spare (don't quote me on this; ask your technical sales guy, he will know). I also know that their recommendations change depending on disk type/size. Definitely worth talking to them.

Once you have your RAID sets, you can create metaLUNs that span RAID sets, or export all or part of a RAID set to the ESX cluster. The rule of thumb I try to follow is: no more than 16 guests per LUN. Because of that, I export 256GB LUNs for my small OS disks (Tier 2 storage). For bigger disks (Tier 3 bulk storage), I create 500+GB LUNs. It depends on how large the VMDK files are that you plan on creating/using. Doing additional LUNs per RAID set does have benefits: each LUN has a separate command queue, so more LUNs can actually increase your performance over a single LUN per RAID set.
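To put the rule of thumb in numbers (a quick sketch using the VM counts from this thread, nothing authoritative):

import math

# Rough LUN-count check from the "no more than 16 guests per LUN"
# rule of thumb; the VM counts are just the ones from this thread.
def min_luns(vm_count, max_guests_per_lun=16):
    return math.ceil(vm_count / max_guests_per_lun)

vms = 40                                   # ~35-40 production VMs
print(f"{vms} VMs need at least {min_luns(vms)} LUNs")   # -> 3

# 4 RAID sets x 2 LUNs each = 8 LUNs: ~5 VMs per LUN on average,
# well under the rule of thumb, with 8 independent command queues.
print(f"8 LUNs -> {vms / 8:.0f} VMs per LUN")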

As for question 2: I'm debating very similar questions myself, so I'm not much help. Sorry.

Matt

fotd
Contributor

  • "If you use the first 5 disks for anything I/O intensive, it will affect all of your SAN performance. The SAN runs embedded XP, and the XP OS is what is stored on those disks."

Yeah, I'm only planning on storing static data there: maybe ISOs, templates, etc.

  • "ask your technical sales guy, he will know). I also know that depending on disk type/size their recommendations change. Definitely worth talking to them."

Just want to see what other people think. I trust EMC; I've just been around long enough to know your typical sales/install guy doesn't always give the best advice.

  • "Doing additional LUNs per RAIDset will have benefits. Each LUN has a separate command queue. More LUNs can actually increase your performance over a single LUN per RAIDset."

Really? I would have thought increasing the LUNs per RAID set would make it work harder and could have a potential impact. But I'm really only talking about maybe running 20 VMs per RAID set, split into two LUNs. Like I said, I don't know too much about SANs, just the basics. Thanks for the comments!

sdd
Enthusiast

Hi,

I agree that you are on the right track here. I'm not sure how far along you are now, but the advice you have been given looks pretty good. On your CX3-10s there are a couple of performance best practices to keep in mind.

First, make sure that you still design for I/O the same way you would with physical servers, i.e. make sure you have enough spindles for the workload. From what you have stated, you look to be doing this. The second is to try to keep LUNs to 400-500GB each for best performance. Larger LUNs will work, but as mentioned earlier, you will get better performance from per-LUN command queueing when you spread the load across more, smaller LUNs.
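To illustrate the queueing point (a sketch only; the queue depth of 32 is an assumed host-side default, not something I'm quoting from a spec):

# Why more, smaller LUNs can help: each LUN has its own command queue,
# so aggregate outstanding I/O grows with LUN count. The per-LUN queue
# depth of 32 is an assumed default -- check your actual HBA/ESX
# settings rather than trusting this number.
PER_LUN_QUEUE_DEPTH = 32

def aggregate_queue_depth(lun_count, per_lun=PER_LUN_QUEUE_DEPTH):
    return lun_count * per_lun

print(f"1x 1500GB LUN : {aggregate_queue_depth(1)} outstanding I/Os")
print(f"3x  500GB LUNs: {aggregate_queue_depth(3)} outstanding I/Os")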

On the availability of the SRA for MV/A, you will see it before the end of the year.

Thanks,

Scott

Disclaimer: I am an EMC Employee

fotd
Contributor

Thanks for the response. I should be receiving the equipment in about two weeks. We bought the field install, so I'm sure the install tech will help me get it all ironed out. Just wanted to make sure I was heading down the right path. The news on SRM with MV/A is good; I hadn't heard anything about it in the past month or two.
