VMware Cloud Community
1905
Contributor

Mission: Find the perfect SAN for our datacenter

Hi

I have a nice problem to solve: finding the perfect SAN for our needs.

Environment:

vSphere 4.1

300 VMs.

We need two SANs located at different geographic sites (10 Gb connection between them, 15 km, < 2 ms latency), 60 TB of storage at each location, and the ability to vMotion every virtual machine in case of planned maintenance or for disaster recovery.

IOPS requirement: ~20,000 IOPS.

Players we are looking at:

Compellent, EMC VNX, HP P4500 (LeftHand), HP 3PAR, HDS AMS 2500. (NetApp was in scope but was removed.)

Personal notes:

Tiering in Compellent looks awesome.

EMC VNX looks like a solid solution.

HP P4500 with Network RAID is unbelievably cheap, but it feels like an SMB solution and I'm not sure iSCSI is the way to go. I haven't found any installation with 14 + 14 P4500 nodes in a multi-site setup.

Would be great to get some input from you guys.

Thanks

17 Replies
JohnADCO
Expert

To me, Reldata has the coolest solutions at the best cost. But they are new movers and shakers in the higher-end storage business right now, so you would have to be willing to "take a chance" on them.

Worth a look for anybody, because they allow unlimited storage and also support highly integrated continued use of most anybody's third-party storage.

We had a test unit in for a month last year and it was extremely impressive. Of course, I am biased now, because I am ready to sign the deal with them.

admin
Immortal

I'd go with something tried and true. Sure, the newcomers are cheap, but they don't have the proven install base of the more popular arrays. Buy the fastest you can. While weighing your options, check the VMware Knowledge Base for the models you are considering; you may find details about some of your candidates you didn't know. Each vendor should also be able to give you performance data: the IOPS per spindle in their arrays, and the IOPS the chosen storage processors deliver with given amounts of cache.

While I can't recommend any particular vendor or model over another, I do know that we (VMware) work very closely with the larger vendors. With all that said, I run a D-Link DNS-323 with the NFS add-on at home for my tinkertoy environment. Small and completely unsupported. :)

mcowger
Immortal

Disclaimer: I work for the 3PAR side of HP

20,000 IOPS and 60 TB with replication is right in our wheelhouse - very easy to achieve. Creating a few dozen LUNs, mapping them to a dozen hosts, and setting up replication between them would take about an hour of effort on your part.

We fully support VAAI and offer tremendous performance (we are currently the record holder for performance on SPC-1), incredible ease of use, top-notch automatic sub-LUN tiering (better than Compellent's, I would argue) and, of all the arrays you list, the only one with true active/active access to a volume.

There is a reason that 7 of the top 10 hosting service providers (most of which use VMware) run their service on 3PAR (as opposed to VNX, Compellent, or others).

If you want more information, feel free to email me and I can get you whatever you need, and a demo or something. :) vcdx52@hp.com

Regards

Matt Cowger (VCDX 52).

--Matt VCDX #52 blog.cowger.us
Josh26
Virtuoso

You've got way too much infrastructure to be investing in a new up-and-comer, regardless of what any single person's experience with them is. I would second your feelings on LeftHand. They may be "tried and true", but this is a much larger installation than 99% of their user base has.

depping
Leadership


We need two SANs located at different geographic sites (10 Gb connection between them, 15 km, < 2 ms latency), 60 TB of storage at each location, and the ability to vMotion every virtual machine in case of planned maintenance or for disaster recovery.

IOPS requirement: ~20,000 IOPS.

That requirement by itself will limit your options, as only a few arrays today can offer those capabilities in a supported manner. Why did NetApp drop off? I don't have a preference either way, but they seem to do well, along with EqualLogic and EMC, when it comes to multi-site clusters.

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

JohnADCO
Expert

Definitely a daunting task for a "Reldata" to break into the market. My only suggestion was that they should be looked at.

They are really not a cheap solution either, by any means. They do have quite a few installations of this size, though.

After having their product in for extensive testing, I am banking on them being the next big thing for sure.

I figure I have three possibilities here...

A. They take off and are the next big thing, so I look like a genius.

B. They hang around and I get the full life of the product, so it looks like I made a sound decision.

C. They go under and my choice looks bad / ill-conceived. :)

Roggy
Contributor

A 3PAR F400 at each site will work well, especially with their Remote Copy function; you can replicate between sites each night.

Plus, it supports Site Recovery Manager if your DR plan requires it.

depping
Leadership

Keep in mind that if it is a campus cluster, there are only three configurations officially certified for HA in a single cluster, as far as I know:

- EMC VPLEX

- NetApp MetroCluster

- Lefthand

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

1905
Contributor

Hey guys, I've been on vacation.

Thanks for the replies.

It feels like LeftHand is out of scope after your responses. 3PAR would be the solution for us if we chose HP as the vendor.

NetApp was removed because the reseller/partner wasn't good enough.

Compellent is still a strong card. The downside is the lack of synchronous replication. The function may arrive soon, but we can't be sure.

Does anybody have any thoughts on or experience with Compellent?

andershansendk
Contributor

Regarding NetApp: couldn't you just find another reseller?

Blog: www.vperformance.org
1905
Contributor

Of course we could. But this is a big deal, including servers, network, storage, support, and implementation, and the NetApp resellers weren't good enough across the board.

1905
Contributor

Do you have any more information on which arrays can offer those capabilities?

I don't know if we will use one cluster. DRS between sites is the main feature we want for planned maintenance. HA is nice to have, but in a future disaster we think SRM will be enough for us.

logiboy123
Expert

You are in the borderline situation where you could use a LeftHand, but you are getting a bit too big for it.

There is a reason why HP paid a lot of money for both LeftHand and 3PAR: they are truly awesome. Easy to install, easy to manage, easy to maintain. I don't work for HP, by the way, and I've worked on pretty much all the major storage platforms.

FC: you only need it if you have extreme latency requirements. Don't get me wrong, FC is a better solution all round, but it's expensive to install and expensive to manage. For most clients it is very hard to justify the cost when they could get iSCSI cheaper and buy more servers with the savings.

Regards,

Paul

1905
Contributor

Your answer is as we suspected. We are having a hard time finding customer cases.

LeftHand looks great now, but what if we increase IOPS and TB every year? And what will happen if we start using a new CRM system or something more demanding?

iSCSI is simple and easy to set up, but we are worried about whether the iSCSI network will be able to handle the traffic. I have seen some forum threads about buffers, jumbo frames, etc. that sound frightening.

We are using FC today, so the experience and knowledge are already present in our organization.

depping
Leadership

I haven't seen many people swamp an iSCSI array or hit limitations on network speed. You can have multiple links to the array or go for 10 GbE. On top of that, it will come down to the number of spindles, the amount of cache, etc., so you will need to factor in the type of workload. But I would recommend contacting a storage vendor, letting them assess the current environment, and having them come up with a recommendation.

Duncan

Yellow-bricks.com | HA/DRS technical deepdive - the ebook!

admin
Immortal

It's very unlikely you will push the limits of a properly designed and scaled iSCSI SAN infrastructure. Try to max out a single 1 Gb iSCSI connection with a regular workload... now expand that across multiple links, controllers, servers, NICs, etc. You get my point. iSCSI easily keeps up with most of today's FC workloads, and with 10 Gb becoming more mainstream there should really be no concerns.
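To put rough numbers on that single-link point: here is a back-of-the-envelope sketch of how many small-block IOPS one 1 Gb iSCSI link can carry. The IO size and protocol-efficiency factor are assumptions, not measurements; real workloads vary widely, so measure your own before trusting figures like these.

```python
# Rough throughput/IOPS ceiling for one 1 Gb iSCSI link.
# EFFICIENCY and IO_SIZE_KB are guesses for illustration only.

LINK_MBIT = 1000          # 1 Gb Ethernet
EFFICIENCY = 0.9          # rough allowance for TCP/IP + iSCSI overhead
IO_SIZE_KB = 8            # typical small-block VM IO

usable_mb_s = LINK_MBIT / 8 * EFFICIENCY          # ~112 MB/s per link
iops_per_link = usable_mb_s * 1000 / IO_SIZE_KB   # ~14,000 small IOPS per link

# Links needed for the thread's ~20,000 IOPS target (bandwidth-wise only;
# the array's spindles/cache are usually the real bottleneck):
links_for_target = 20000 / iops_per_link

print(usable_mb_s, iops_per_link, links_for_target)
```

On these assumed numbers, a couple of 1 Gb paths (or a single 10 GbE link) has bandwidth headroom for 20,000 small-block IOPS, which is why the spindle/cache design tends to matter more than the wire.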

I noticed EqualLogic was not on your list. Any particular reason? We are right around the 40 TB mark; that's a far cry from 60, I realize, but a couple of 10k-based PS6500X "sumos", or even 48 or 96 TB SATA-based PS6500Es with some PS6000X or XV arrays for higher-performance workloads (tiering your data the way we do), might also be a solution worth investigating. Just wanted to throw that out there.

tonyholland007
Contributor

Hello. Full disclosure: I work for Dell Compellent, and our SAN solution would fit all of your needs. Once you get into the sales cycle, if you would like to talk about VMware or use cases for Live Volume technology with VMware, I would be happy to talk to you. Hopefully you find the right solution for your needs, and good luck; hopefully we will talk in the future.

Tony
