PerryM
Contributor

EMC Clariion CX600, ESX 3.5, Lun strategy and setup

We are about to deploy an EMC CX600 with 5 ESX servers.

Our CX600 has 4 shelves of 36GB Fibre Channel drives and 4 shelves of 320GB SATA drives. Obviously, metaLUNs are the way to go.

I want to set this up right and have been reading the forums. My plan is to set up multiple 4+1 RAID sets on the SATA drives and multiple 8+1 RAID sets on the Fibre Channel drives. I'm not sure how large to make the LUNs, though; i.e., should I create 8 x 250GB LUNs and then present them as a 2TB volume? For the SATA drives, I'd like to start off with 4 LUNs presented to the ESX servers, each as close to 2TB as possible. Our main file server will need at least 4TB. Initially, we will only be deploying a Windows 2008 server in a file server role using DFS. Can anyone give their suggestions on the metaLUN configuration? CXSANGUY? You out there?
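Here is the rough capacity math I'm working from, as a quick sanity-check sketch (nominal drive sizes; actual Clariion formatted capacity per drive and Flare/metaLUN overhead will knock these numbers down a bit):

```python
# Rough usable-capacity estimates (nominal drive sizes, RAID 5 parity only;
# formatted capacity and array overhead are ignored).

def raid5_usable_gb(drives_in_group: int, drive_gb: int) -> int:
    """Usable GB of one N+1 RAID 5 group (one drive's worth of parity)."""
    return (drives_in_group - 1) * drive_gb

sata_rg_gb = raid5_usable_gb(5, 320)   # 4+1 group of 320GB SATA -> 1280GB
fc_rg_gb = raid5_usable_gb(9, 36)      # 8+1 group of 36GB FC    -> 288GB

# The LUN-size question: 8 x 250GB component LUNs striped into one metaLUN
metalun_gb = 8 * 250                   # 2000GB, just under the 2TB per-LUN limit

print(f"4+1 SATA RAID group: ~{sata_rg_gb} GB usable")
print(f"8+1 FC RAID group:   ~{fc_rg_gb} GB usable")
print(f"8 x 250GB metaLUN:   {metalun_gb} GB presented to ESX")
```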

Thanks

Perry M

7 Replies
azn2kew
Champion

We carve our LUNs at 400-600GB; that's a sweet spot. You can go bigger if you want, but I wouldn't go larger than 1TB. Storing too many VMDKs on one ESX datastore is not recommended; aim for 12-16 per LUN, though of course it varies with server function. Read the attached CX600 best practices guide, page 50, for details.

I'm not sure about Windows 2008 because we're not using it in production.

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

iGeek Systems Inc.

VMware, Citrix, Microsoft Consultant

kjb007
Immortal

I would not recommend a VMFS datastore that large just for one VM. A better practice in this case would be to use an RDM and attach that to your VM instead.

-KjB

PerryM
Contributor

This is what I was going to do:

  • Create a 100GB LUN for the OS on the Fibre Channel drives (LUN ID 30)

  • Create a 1.8TB LUN on SATA (LUN ID 100)

  • Create a 1.4TB LUN on SATA (LUN ID 101)

Present these 3 LUNs to the guest Win2008 server.

This particular server has the following requirements:

Operating system drive: (C:) 60GB

Data volumes/Drive letter assignments:

K: (300GB), H: (900GB), M: (900GB), P: (100GB), I: (250GB), O: (150GB), R: (500GB)

Drives H: & M: would be stored on LUN ID 100 (as two 900GB VMDKs), and the rest on LUN ID 101.

Do you recommend that I create separate VMDKs for each logical drive, or just create one large VMDK per LUN and partition it through Windows?

I set up 6 RAID groups of 4+1 and have created a 1.8TB metaLUN (300GB LUNs x 6) to host the two 900GB H: & M: logical drives, and another metaLUN (250GB LUNs x 6) for the rest.
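Quick tally of how those volumes would land on the two metaLUNs (illustrative only; it ignores VMFS formatting overhead, VM config/swap files, and snapshot space, so the 1.8TB metaLUN ends up completely full with the two 900GB VMDKs):

```python
# Check each metaLUN's allocation against its nominal capacity.
metalun_capacity_gb = {
    100: 6 * 300,   # 1.8TB metaLUN (six 300GB component LUNs)
    101: 6 * 250,   # 1.5TB metaLUN (six 250GB component LUNs)
}

# Proposed placement of the Windows data volumes as individual VMDKs
placement_gb = {
    100: {"H:": 900, "M:": 900},
    101: {"K:": 300, "P:": 100, "I:": 250, "O:": 150, "R:": 500},
}

for lun_id, volumes in placement_gb.items():
    used = sum(volumes.values())
    cap = metalun_capacity_gb[lun_id]
    print(f"LUN {lun_id}: {used} GB of {cap} GB allocated, {cap - used} GB headroom")
```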

-pm

kjb007
Immortal

Firstly, I would create separate VMDKs. If in the future you need to expand the storage, with the disk partitioned it will be difficult to increase anything other than the last partition on the disk. If you create 3 partitions on one disk, the first 2 will not be able to be expanded unless you get a partitioning product to move the partition start and end points around, which comes with its own set of risks. So, for those volumes, I would recommend separate VMDKs.

Second, I would make up the metaLUN using slightly larger individual LUNs. Maybe 6 x 300GB LUNs to make up the metaLUN, and still split it across those 3 RAID groups. Since your drives are 320GB each, I would try to get my individual LUNs as close to the drive size as possible, so as not to stripe multiple times across the same spindles.
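To illustrate the idea with some hypothetical numbers (nominal capacities only): the more component LUNs a metaLUN needs relative to the number of RAID groups underneath it, the more often its stripe lands back on the same spindles.

```python
# How many component LUNs of a metaLUN end up on each RAID group.
# More than one per group means the metaLUN stripe crosses those
# spindles multiple times. (Hypothetical sizes, nominal capacities.)
import math

def components_per_group(metalun_gb: int, component_gb: int, raid_groups: int):
    components = math.ceil(metalun_gb / component_gb)
    worst_case_per_group = math.ceil(components / raid_groups)
    return components, worst_case_per_group

for component_gb in (250, 300, 600):
    comps, per_group = components_per_group(1800, component_gb, raid_groups=3)
    print(f"{component_gb}GB components: {comps} LUNs, "
          f"up to {per_group} per RAID group")
```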

-KjB

sdd
Enthusiast

Perry,

Do you have any idea what performance characteristics you will be looking for? What other applications do you plan to deploy on the CX? Usually, I have seen better luck designing to the guests' performance needs and scale rather than to an arbitrary size. That often means smaller LUNs for VMFS performance, and RAID group layouts that meet the I/O requirements of the combined guests.

Luckily, even if you misconfigure something, you can always use the LUN migration capability to change it later. The only requirements are that you have enough free space to create a LUN of the same size to migrate to, and that you are on Flare 16 or newer.
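As a rough illustration of that kind of I/O-driven sizing (a made-up workload and rule-of-thumb per-spindle numbers, not EMC guidance for your specific array):

```python
# Estimate back-end disk IOPS and spindle count from a guest workload,
# applying the RAID 5 write penalty (each host write costs ~4 disk I/Os).
import math

def backend_iops(frontend_iops: float, read_fraction: float, write_penalty: int) -> float:
    reads = frontend_iops * read_fraction
    writes = frontend_iops * (1 - read_fraction)
    return reads + writes * write_penalty

# Hypothetical combined guest workload: 1200 IOPS at 70% reads
total = backend_iops(1200, read_fraction=0.70, write_penalty=4)

per_fc_spindle = 140    # rough figure for a 10k FC drive
per_sata_spindle = 80   # rough figure for a SATA drive

print(f"Back-end IOPS on RAID 5: {total:.0f}")
print(f"Roughly {math.ceil(total / per_fc_spindle)} FC spindles "
      f"or {math.ceil(total / per_sata_spindle)} SATA spindles")
```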

Regards,

-Scott

Disclaimer: I am an EMC Employee
PerryM
Contributor

Most of our VMs will be file servers; however, we do plan to implement a SharePoint and a SQL Server. We also plan to implement VDI and host virtual XP workstations.

I'm at a point now where I know how I am going to implement my ATA storage.

I now have 4 DAEs (shelves) of 36GB drives that I need to define for hosting the VMs' OS drives.

I was considering creating 4 RAID groups of 8+1, which would give me ~856GB total. I was going to split that into 4 metaLUNs of 214GB each. Each metaLUN would host from 2 to 8 VMs depending on their sizes.
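For example, packing a made-up mix of OS VMDKs onto 214GB metaLUNs works out roughly like this (simple next-fit packing with hypothetical VM sizes; it ignores VMFS overhead, vswap, and snapshot space):

```python
# Next-fit packing of example OS VMDKs onto 214GB metaLUNs
# (hypothetical VM sizes; real placement needs headroom for overhead).
metalun_gb = 214
os_vmdk_gb = [100, 60, 40, 40, 30, 30, 25, 25]   # example OS drive sizes

lun_number, current, used = 1, [], 0
for size in os_vmdk_gb:
    if used + size > metalun_gb:
        print(f"metaLUN {lun_number}: {current} = {used} GB")
        lun_number, current, used = lun_number + 1, [], 0
    current.append(size)
    used += size
print(f"metaLUN {lun_number}: {current} = {used} GB")
```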

My biggest concern with using metaLUNs, though, is that if we lost one of the RAID groups (i.e., 2 disks in one RAID group failed at the same time), we would lose EVERYTHING on the metaLUNs, and it would take down all servers until we were able to restore from backup. I'm planning to use some of the Clariion features like SAN Copy, remote mirrors, snapshots, etc., so that we have good backups. I have not yet gone down the road of how DR would work in that scenario. I definitely want to make sure, though, that we will be able to get back online quickly.

sdd
Enthusiast

Sounds like you are looking at all the right things. 4 RAID groups of 8+1 with metaLUNs is definitely a good layout for what you are looking at. If you do that, you still have 24 drives left for other uses (2 of those should be hot spares). When you look at SQL, some of those can be a good place to allocate RAID 1/0 for logs and RAID 5 or 1/0 for the DB (look at the read/write ratio to see whether you will get any benefit from RAID 1/0).

You are right that a metaLUN striped across multiple RAID groups can be lost if one RAID group is lost. Recovery time can be improved in a couple of ways. One is certainly leveraging a replicated copy via tools like MirrorView or SAN Copy. Another option is cloning in the array to a different set of drives if you have SnapView (snapshots live on the same disks and therefore don't protect against multiple disk failures). If you do one of these, make sure you use consistency groups so that all underlying volumes for a given guest get split at the same time, allowing crash-consistent recovery.

Regards,

-Scott

Disclaimer: I am an EMC Employee