cebomholt
Enthusiast

iSCSI vendors vs "enterprise storage"

So I've seen a few of these threads floating around, and thought I would lay out my scenario for some insight (the answer is always "it depends", right?)-

Currently have ~40 VMs on a three-host 4.0u1 cluster, all low resource usage and low IO. I'm looking to consolidate another 80-90 servers into the cluster, which will for the most part keep the same pattern of low resource/low IO. On the storage side, we're moving away from an old FC SAN and probably moving to a full iSCSI solution (likely LeftHand or EqualLogic).

My question is this: from all of the IO requirement/storage throughput calculations I've pulled, the iSCSI gear will be more than capable of handling the environment... Still, every time I talk to a traditional FC vendor, their response is something to the effect of: "you need a big-boy SAN to support your enterprise." How much of this is legitimate, and how much is just salespeople being "salesy"? I have yet to hear a decent technical argument on why the traditional products are so much more scalable and "enterprise" than the iSCSI vendors we're looking at. And honestly, the price point and feature set that come along with this gear make it very much worth considering... not to mention the gear appears to scale quite nicely.

So it's an enterprise, and I need to make sure I don't back myself into a corner here in terms of growth and replication... I'd like to hear some experiences/thoughts on this and/or similar situations?
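For what it's worth, the kind of back-of-the-envelope math I've been doing looks something like this. All of the per-VM figures below are illustrative assumptions, not measurements from our environment:

```python
# Back-of-the-envelope sizing for ~130 low-IO VMs against a single
# 1 GbE iSCSI link. All per-VM figures are illustrative assumptions.

def aggregate_iops(vm_count, avg_iops_per_vm):
    """Total front-end IOPS the array must serve."""
    return vm_count * avg_iops_per_vm

def aggregate_mbps(vm_count, avg_mbs_per_vm):
    """Total throughput (MB/s) across all VMs."""
    return vm_count * avg_mbs_per_vm

GIGE_USABLE_MBS = 110  # rough usable MB/s on 1 GbE after protocol overhead

vms = 130                                 # ~40 existing + ~90 consolidated
total_iops = aggregate_iops(vms, 30)      # assume 30 IOPS per low-IO VM
total_mbs = aggregate_mbps(vms, 0.5)      # assume 0.5 MB/s per low-IO VM

print(total_iops, total_mbs, total_mbs < GIGE_USABLE_MBS)  # 3900 65.0 True
```

At those guessed numbers, even a single gigabit link has throughput headroom; the interesting constraint is the IOPS the disks can actually deliver.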

12 Replies
sketchy00
Hot Shot

You might get some very passionate responses about this. I'm going to stay away from that and will just say this: the feedback you are getting from the FC vendors should be totally expected. Of course they are saying that; they don't want to lose you to iSCSI. No vendor is going to be objective on this. You might even find the same thing with end users out here - they like whatever technology they decided to go with.

Both approaches are fine. I'm very satisfied that I went the iSCSI route (EqualLogic). The decision will ultimately be yours: look for battle-tested, bet-the-business solutions, but also acknowledge the trends in the industry that will drive what ultimately is the best long-term decision for you.

cebomholt
Enthusiast

Thanks, it's always helpful to hear from someone else who approached the situation similarly and is satisfied... Mind me asking about the size and IO/throughput requirements of your environment?

Of course, any other thoughts are also welcome

sketchy00
Hot Shot

Currently we have one PS5000 array (16 x 1TB 7200 RPM SATA drives as RAID 50). I just ordered another array for offsite replication at an external site, and we'll be adding an additional array internally late this year or early next year to accommodate capacity planning. We run a mix of high and low load VMs (50 right now, probably double that in a year): Exchange, SQL, source code control, etc., as well as development compiler machines. The EqualLogic SAN HQ software (all their software is included) gives a nice handle on actual throughput and IOPS stats. It shows that, at least for my environment, 15,000 RPM SAS drives would have been total overkill.

I can provide more data offline if you'd like.
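To put rough numbers on the "SAS would have been overkill" point, here's a sketch of the usual rule-of-thumb math. The per-drive IOPS figure and the RAID write penalty below are generic assumptions, not EqualLogic-published specs:

```python
# Rule-of-thumb estimate of what 16 x 7200 RPM SATA drives can serve.
# ~75 IOPS per 7200 RPM drive and a RAID 5/50 write penalty of 4 are
# common planning assumptions, not vendor-published numbers.

def effective_iops(drives, iops_per_drive, read_fraction, write_penalty=4):
    """Front-end IOPS after accounting for the RAID write penalty."""
    raw = drives * iops_per_drive
    # Each write costs write_penalty back-end IOs on RAID 5/50.
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

# A 70% read / 30% write mix on 16 SATA drives:
print(round(effective_iops(16, 75, 0.7)))  # -> 632
```

If SAN HQ shows your sustained load well under a figure like that, paying the premium for 15K SAS spindles buys you headroom you may never use.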

depping
Leadership

I normally wouldn't recommend SATA, but iSCSI is no issue at all. I have large enterprise customers running on iSCSI, or NFS for that matter. As long as the array is scaled/sized according to the workload, there is no reason not to go with iSCSI or NFS.



Duncan

VMware Communities User Moderator | VCP | VCDX

-


Now available: <a href="http://www.amazon.com/gp/product/1439263450?ie=UTF8&tag=yellowbricks-20&linkCode=as2&camp=1789&creative=9325&creativeASIN=1439263450">Paper - vSphere 4.0 Quick Start Guide (via amazon.com)</a> | <a href="http://www.lulu.com/product/download/vsphere-40-quick-start-guide/6169778">PDF (via lulu.com)</a>

Blogging: http://www.yellow-bricks.com | Twitter: http://www.twitter.com/DuncanYB

RussellCorey
Hot Shot

I have two customers that come to mind. Both have 1,200-1,500 user Exchange databases and ~150-240 VMs running on a couple of 1 gig iSCSI links.

One customer is using a NetApp FAS3020 and the other is using an old EMC Celerra whose model number escapes me. Keep an eye on your workloads and just be ready to add 1 gig links when you need them. I'd bet good odds you'll run out of disk performance before you hit a wall on the network side.

FC does scale better than most, but by the same token you don't need a C-130 air transport to move IKEA furniture 5 miles down the road when a U-Haul van will do the job with room to spare.

Gather up all your requirements (IO requirements both current and projected, replication requirements, whether you need point-in-time snaps, etc.) and find a product that suits your needs. LeftHand will likely suit your needs and has the plus of simply scaling out as you add more units/shelves. I'm less familiar with EQL solutions, but I have a couple of friends running them in production with some success.

Make sure you buy enough of the right disks. Gigabit Ethernet is suitable plumbing for an org your size, especially since you can always roll out MPIO and utilize more uplinks without having to go to 10 gig.
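The "disks before network" bet can be sketched with some illustrative numbers. The usable-MB/s-per-link figure and the disk throughput guesses below are assumptions for the sake of the comparison, not measurements:

```python
# Sketch of the "you'll run out of disk before network" argument:
# compare what the spindles can realistically deliver with the
# aggregate bandwidth of N GbE links under MPIO. Illustrative only.

GIGE_USABLE_MBS = 110  # rough usable MB/s per 1 GbE link after overhead

def network_ceiling_mbs(links):
    """Aggregate MB/s available across MPIO'd gigabit links."""
    return links * GIGE_USABLE_MBS

def bottleneck(disk_mbs, links):
    """Which side saturates first for a given disk capability."""
    return "disk" if disk_mbs < network_ceiling_mbs(links) else "network"

# A random VM mix on SATA spindles might realistically deliver
# ~150 MB/s, far below the drives' sequential best case:
print(bottleneck(150, 2))  # two MPIO'd links -> "disk"
```

The point is just that adding a link is cheap and quick; adding spindles is the purchase you need to size correctly up front.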

Josh26
Virtuoso

There may be a view that LeftHand and similar solutions already are "enterprise".

Sure, you can be more "enterprise" with FC, and there are certainly still reasons I would push it. 80-90 VMs is a lot, but it's all context dependent: I could imagine 90 Linux boxes running Apache having lower needs than a single large Exchange server, for example.

But it's important to separate things like a mirrored cluster of two LeftHand SANs from sticking some low-priced service on a Windows server and calling it a SAN. That part is definitely valid - if you have more than a lab, you'll be shooting yourself in the foot by avoiding the "big boys".

cebomholt
Enthusiast

Thanks all for the input. Josh26, would you mind elaborating on what types of limitations I will run into by avoiding the traditional FC vendors? As far as software SANs go, the only real consideration I've given is to use LeftHand VSAs for replication to a DR site.

Josh26
Virtuoso


There are dozens of software iSCSI vendors - and doubtless one of them will chime in shortly - but with those you face the fact that your SAN is no more highly available than the Windows OS it's running on. Arguably, it's significantly less available than an individual ESXi installation.

Edit: again, my point is that although something like LeftHand is still essentially a software iSCSI SAN, it's in a significantly different boat (closer to "enterprise") than some of the Windows services out there.

PaulSvirin
Expert

IMHO, the most important thing is storage performance. If you configure the storage for best performance, there is no big difference whether you use FC or iSCSI.

And it is known that if the storage is configured correctly, it is possible to run even 70+ VMs using NFS.

---

iSCSI SAN software http://www.starwindsoftware.com
Josh26
Virtuoso

Hi,

Whilst there are certainly scenarios where iSCSI or NFS is appropriate for any number of VMs, suggesting there is no performance difference between iSCSI and FC... well, that's a sales pitch.

malaysiavm
Expert

You should consider unified storage today if you have the choice.

The storage supports iSCSI, FC, FCoE, NFS and CIFS out of the box. Choose whichever protocol you think is suitable for your environment.

VMware supports the major protocols of today's storage vendors.

Craig

vExpert 2009 & 2010 | NetApp NCIE, NCDA 8.0.1 | Malaysia VMware Communities - http://www.malaysiavm.com

"Sales pitch"? I know it may be hard to believe, but it is not :)

Yes, if you spend a lot you will get a lot, but you would have to pay really a lot, I mean A LOT, for FC to see performance increase dramatically compared to enterprise-level iSCSI SAN solutions. As you know, Intel used iSCSI SAN software in their tests and got 1,000,000 IOPS! Here, take a look:

http://download.intel.com/support/network/sb/inteliscsiwp.pdf

Kind Regards, Anatoly Vilchinsky

iSCSI Software Support Department

http://www.starwindsoftware.com