VMware Cloud Community
KyNetGuy
Enthusiast

SAN Selection and Life Expectancy

I'd like to get some information from people about their SANs and replacement schedules.

Consider that a lower-end CX series Fibre Channel SAN from EMC can run over $100K with very modest capacity, and is usually only under warranty for 3 or 4 years; that can certainly drive the TCO of VMware through the roof.

For those of you who have been virtualized for 5 or more years, or who have used SANs for 5 or more years, what life expectancy do you plan for when you purchase a SAN?  My finance department hates to see me coming at budget time.

The cost to maintain EMC arrays at year 4 is outrageous when you consider the actual likelihood of failures.  I am afraid to even consider a 5th year.

My thought on EMC is that they are like Cisco: no one has ever been fired for buying either.  They work.  Everyone supports them.  They are both tier 1 providers.  I did look at HP SANs, but I was not impressed with their disk usage and how they differentiate RAID types based on how the data is striped on the disks.  The whole "one pool of disks" thing just didn't make sense (and probably doesn't for us EMC types).

Setting aside upgrades driven by new technology like 8Gb FC, 10Gb iSCSI, or SSD drives, if you have capacity and IOPS to spare, how long do you expect your SAN to last?  Give me some general feedback and thoughts.
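For what it's worth, here is the back-of-the-napkin math I've been taking to finance.  This is just a sketch in Python with made-up placeholder numbers (the $100K purchase, the 10TB usable, the post-warranty maintenance quotes are all assumptions, not real EMC pricing); swap in your own figures:

    # Rough annualized-cost sketch. All figures are hypothetical placeholders,
    # not real vendor pricing; plug in your own quote numbers.
    purchase_price = 100_000.0          # array + initial 3-year warranty
    usable_tb = 10.0                    # usable capacity after RAID/hot spares
    maintenance_by_year = {4: 18_000.0, 5: 25_000.0}  # post-warranty support (guessed)

    def annual_cost_per_tb(years_kept):
        """Total cost of ownership divided by years kept and usable TB."""
        maintenance = sum(cost for yr, cost in maintenance_by_year.items() if yr <= years_kept)
        return (purchase_price + maintenance) / years_kept / usable_tb

    for years in (3, 4, 5):
        print(f"Keep {years} years: ${annual_cost_per_tb(years):,.0f} per usable TB per year")

Even with painful year-4/year-5 maintenance, the per-year cost can still drop by stretching the array, which is exactly the argument I keep having at budget time.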

2 Replies
golddiggie
Champion

For larger organizations, EMC arrays make sense since they have features geared toward that scale, things like 'phone home' alerts for drives that are going south (included in the cost).  But if you're looking to save money, you'll either need to look at the lower-cost EMC devices or consider one of the more budget-friendly array makers.  I've had excellent results with EqualLogic arrays.  They offer a wide range of drive/spindle options for the arrays, with attractive prices.

I've seen organizations use arrays from more than one manufacturer too, such as EMC arrays for some things and EqualLogic/HP arrays for others.

One thing you could look into would be to 'charge back' the different departments that have files/VMs on the arrays.  Have them pay at least part of the costs associated with using those arrays; that usually makes them more aware of how using more space on the array translates into increased cost...  Even if the IT department pays for the initial hardware, any additional disk space needed to support other groups should be paid for by those groups, and have them pay their share of the maintenance costs too.
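A minimal sketch of that kind of chargeback split (Python; the department names, GB figures, and annual cost are invented just to show the idea):

    # Hypothetical chargeback: each group pays a share of the annual array cost
    # (amortized purchase + yearly maintenance) in proportion to the GB it consumes.
    annual_array_cost = 40_000.0   # placeholder figure
    usage_gb = {"Exchange": 800, "SQL": 1_200, "FileShares": 2_000, "VMs": 1_000}

    total_gb = sum(usage_gb.values())
    for dept, gb in usage_gb.items():
        share = gb / total_gb * annual_array_cost
        print(f"{dept}: {gb} GB -> ${share:,.0f}/year")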

Chances are, a solid array will last you 5-10 years before you actually want/need to change it. Even then, it could be migrated to more Tier 2 tasks/uses. Also, with drive technology changing as fast as it has been lately, and costs per TB dropping, you should be able to increase your capacity without having the CFO breathing down your neck. What you paid for 10TB two years ago should get you 50TB (or more) this year. I usually try to project data growth/bloat about 2 years ahead, getting as close to that capacity as I can get the budget approved for.
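If it helps, that two-year projection is just compounding your observed growth rate forward.  A quick sketch, with the current usage, growth rate, and headroom all assumed for illustration:

    # Project capacity needed ~2 years out from an observed annual growth rate.
    # All inputs are assumptions; substitute your own monitoring data.
    current_used_tb = 6.0
    annual_growth_rate = 0.35     # 35% per year, guessed from the past trend
    years_ahead = 2
    headroom = 1.2                # 20% buffer so you aren't buying again early

    projected_tb = current_used_tb * (1 + annual_growth_rate) ** years_ahead * headroom
    print(f"Plan for roughly {projected_tb:.1f} TB usable")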

Also, I think the way EqualLogic organizes the arrays makes more sense than how EMC does it.  You can build a single array from 16+ spindles using RAID 50, giving high IOPS and redundancy with ease...  You then carve up the space into LUNs sized as you need/wish for the task.

KyNetGuy
Enthusiast

golddiggie wrote:

Also, I think the way EqualLogic organizes the arrays makes more sense than how EMC does it.  You can build a single array from 16+ spindles using RAID 50, giving high IOPS and redundancy with ease...  You then carve up the space into LUNs sized as you need/wish for the task.

Thanks for the input.

I have looked at RAID 50, but I'm not quite sure about it.

If I understand it correctly, you have two RAID 5 sets striped together.  So your capacity loss to parity is the same as two separate RAID 5 arrays, which would be two disks' worth.  That is fine; no drawback there.  It would be the same as running two 7+1 R5 groups in the EMC world.

You would get a read performance increase similar to R0.  You should get improved write performance too, but not quite as good as R0 due to the parity overhead of the two R5 segments.
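To put rough numbers on that, here's how I'd sanity-check the parity loss and the write penalty.  The spindle count, drive size, and per-disk IOPS below are just illustrative assumptions:

    # RAID 50 = a RAID 0 stripe across two RAID 5 sets, so parity cost is one
    # disk per underlying RAID 5 set -- the same loss as two separate 7+1 R5 groups.
    disks = 16
    disk_size_tb = 0.6            # e.g. 600 GB drives (assumed)
    raid5_sets = 2
    per_disk_iops = 180           # rough figure for a 15k spindle (assumed)

    usable_tb = (disks - raid5_sets) * disk_size_tb      # lose one disk of parity per set
    read_iops = disks * per_disk_iops                    # reads spread over all spindles
    write_iops = disks * per_disk_iops / 4               # RAID 5 write penalty of ~4 back-end I/Os

    print(f"Usable: {usable_tb:.1f} TB, ~{read_iops} read IOPS, ~{write_iops:.0f} write IOPS")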

What I don't like about it goes back to the same reason I didn't like the HP EVAs: all spindles being used for every LUN.  While this is potentially a boost, you can have a hard-hitting Exchange/SQL/Oracle server, or even a couple of them, and those drag down the available I/O and IOPS for all other LUNs.  Though more expensive, there is merit to having dedicated spindles/storage groups for apps.  A lot of that is a shot in the dark though.  (EMC speak) Do I split my DAEs into multiple smaller R5 arrays to spread out disk contention, or make larger R5 arrays?  It's always a hard call and a lot of guesswork and hypothesis as to which will be best, unless you are migrating and have known I/O and IOPS figures to quantify your decision.
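When you do have real numbers from an existing environment, at least the guesswork turns into arithmetic.  Here is the kind of sizing sketch I mean; the workload IOPS, read/write mix, and per-disk figure are assumed placeholders:

    # Back out required spindles from a measured workload, accounting for the
    # RAID write penalty. The workload numbers below are placeholders.
    workload_iops = 2_500          # measured front-end IOPS for Exchange/SQL (assumed)
    read_pct = 0.6                 # read/write mix from perf stats (assumed)
    write_penalty = 4              # RAID 5; use 2 for RAID 1/0
    per_disk_iops = 180            # 15k spindle estimate (assumed)

    backend_iops = workload_iops * read_pct + workload_iops * (1 - read_pct) * write_penalty
    spindles_needed = backend_iops / per_disk_iops
    print(f"Back-end IOPS: {backend_iops:.0f}, spindles needed: {spindles_needed:.1f}")

That at least tells you whether a hard-hitting app deserves its own RAID group or can safely share the pool.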

HP EVAs use all spindles and determine the RAID level based on how the data is striped; there are no dedicated spindles for R1, R5, R1/0, etc.  I would be really interested in the opinion of a NON-sales person who has run REALLY LARGE environments on both types of configurations.  Unfortunately those people are few and far between, and no vendor gives honest answers.  They only stress the points that make them look better.  (I know, it's their job)
