VMware Cloud Community
MikaA
Contributor

iSCSI Design - Multiple boxes vs. single one?

A design question for the budget-constrained..

We are designing an ESX environment which will consist of 2-3 hosts, each with two quad-core CPUs. We are leaning towards iSCSI/NFS storage for it, but there are some more or less open questions where I was hoping to get some suggestions or opinions. I've been reading the forums and it seems that EqualLogic rules iSCSI, NetApp rules NFS, and then there are some others. One of the others is Dell's MD3000i (SAS), which is dirt cheap: one gets three of those for the price of one PS100E (SATA).

So, the question is:

Performance-wise: should we take the iSCSI route and, given the ESX iSCSI limitations, would it be better to buy, say, two or three MD3000i's and use two HW HBAs to connect to them, OR get one e.g. PS100E? Is EqualLogic three times better?

The reasoning here being: we can get 2 GigE of bandwidth per host, and 6 Gb aggregate with three boxes versus 2 GigE with one, not to mention three times as many physical disks. I understand the bandwidth itself may not be that much of an issue in the end, but the number of disks would be. The bandwidth benefit also basically disappears once 10 GigE HW HBAs become available.
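For what it's worth, here's my back-of-envelope math as a quick Python sketch. The ~125 MB/s per-link figure is just the theoretical GigE wire speed (my assumption, ignoring iSCSI/TCP overhead), not a measured number for either array:

```python
# Theoretical iSCSI bandwidth ceilings; assumes ~125 MB/s usable per
# GigE link, which ignores protocol overhead.
GIGE_MB_S = 125

def aggregate_mb_s(boxes: int, gige_links_per_box: int) -> int:
    """Combined theoretical bandwidth across all storage boxes."""
    return boxes * gige_links_per_box * GIGE_MB_S

three_md3000i = aggregate_mb_s(3, 2)  # 3 boxes x 2 links -> 750 MB/s ceiling
one_ps100e = aggregate_mb_s(1, 2)     # 1 box x 2 links  -> 250 MB/s ceiling
print(three_md3000i, one_ps100e)
```

Of course these are ceilings only; whether anything ever fills them is the real question.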

Administration and management is a whole other story - no competition there. But the question is: how much of a performance loss - if any - does a great management interface justify?

Thanks for any input!

17 Replies
christianZ
Champion

Remember the EQL price includes all features (snapshots (r/w), volume cloning, replication - only possible with 2 or more boxes - Windows/SQL/Exchange integration, unlimited firmware upgrades) - but I guess 3 MD3000i's with SAS (36 disks) will give you better throughput than one PS100 (14 SATAs).

Unfortunately nobody has tested the MD3000i here: http://communities.vmware.com/thread/73745

Milton21
Hot Shot

As already stated, EqualLogic has tons of features. If you don't plan on using them, why pay for them?

canadait
Hot Shot

Have you taken a look at www.lefthandnetworks.com?

Looks like a very interesting iSCSI play.

jasonboche
Immortal

Also have a look at:

http://www.vmware.com/resources/techresources/1006

Jason Boche

VMware Communities User Moderator

VCDX3 #34, VCDX4, VCDX5, VCAP4-DCA #14, VCAP4-DCD #35, VCAP5-DCD, VCPx4, vEXPERTx4, MCSEx3, MCSAx2, MCP, CCAx2, A+
pasikarkkainen
Contributor

Remember that the whole EqualLogic architecture is quite different from normal SAN "boxes". Even if you have only one EQL array, you still create a "group" and manage and use that group via a single IP address. When you get a new EQL array, you just join it to the group (online, without interruption) and you instantly have more usable space in your group. The arrays then start to automatically redistribute the data online, in the background (servers see no interruption), which means that after some time the controllers and cache of all the arrays serve all the data, including existing data/volumes. Automatic load balancing without having to reconfigure servers/initiators (EQL uses iSCSI protocol redirection to rebalance active sessions online). The point is that you can start with a single array and grow by adding arrays when you need more disk space or performance.

With each EQL array you get 3 more GigE ports, more cache, more controller/RAID performance, and more disk space in your group..

If you mix SATA and SAS EQL arrays in the same group, EQL automatically load-balances the volumes needing more I/O onto the faster (SAS) arrays. You can of course configure this manually too, if you want.

So, the point is: if you get 2 or 3 "black boxes", management of the environment gets more difficult, and scaling of performance and disk space is harder and not automatically optimized.

canadait
Hot Shot

What happens if you have a group of EqualLogic boxes and one of them fails?

doubleH
Expert

From my understanding, the volumes on the box that failed will be unavailable, even though volumes can be load-balanced between arrays. I brought this up with EqualLogic at one time, but when you think about their design, each box itself is designed for ultimate redundancy.

If you found this or any other post helpful please consider the use of the Helpful/Correct buttons to award points

doubleH
Expert

nicely said.

If you found this or any other post helpful please consider the use of the Helpful/Correct buttons to award points

cmanucy
Hot Shot

To answer your original question: yes, EqualLogic's single box IS better than many of the competition's multiple boxes.

It does, however, introduce a single point of failure in the iSCSI infrastructure, which is why we chose LeftHand over EQ. But that was because LH fit our needs - YMMV.

One other error in your assumptions - you're not going to get "aggregate" bandwidth on iSCSI by tying multiple GigE ports together. You can certainly have them there for redundancy, but don't think you're going to be able to bond them on the ESX side.

That being said, it really doesn't matter much. Your switches will make a bigger impact (don't buy cheap switches for iSCSI! They WILL choke!). The array's ability to spew data to the ESX hosts is MUCH more important and will net you MUCH better performance than a 2 Gb link would. You're not going to fill even 1 Gb unless you're streaming the hell out of something anyway - since most ESX workloads comprise a bunch of random I/O, spindles and RPM matter.
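To put rough numbers on the spindles-vs-link point, here's a sketch in Python. The per-spindle IOPS figures and the 8 KB I/O size are generic rules of thumb I'm assuming, not measured values for the MD3000i or PS100E:

```python
# Rough random-I/O arithmetic. Per-spindle IOPS values are generic
# assumptions for 15k SAS and 7.2k SATA drives, not vendor specs.
SAS_15K_IOPS = 175   # assumed small-block random IOPS per SAS spindle
SATA_7K_IOPS = 80    # assumed per 7.2k SATA spindle
IO_SIZE_KB = 8       # assumed typical random I/O size for VM workloads

def spindle_iops(disks: int, per_disk: int) -> int:
    """Aggregate random IOPS the disk spindles can deliver."""
    return disks * per_disk

def gige_link_iops(io_size_kb: int = IO_SIZE_KB) -> int:
    """How many small IOs a single ~125 MB/s GigE link could carry."""
    return 125 * 1024 // io_size_kb

print(spindle_iops(36, SAS_15K_IOPS))  # 3x MD3000i, 36 SAS disks
print(spindle_iops(14, SATA_7K_IOPS))  # 1x PS100E, 14 SATA disks
print(gige_link_iops())                # the link's small-IO ceiling
```

Under these assumptions the spindles run out of random IOPS (a few thousand) long before even a single GigE link fills (~16,000 8 KB IOPS), which is exactly why spindle count and RPM matter more than link bandwidth.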

Again, YMMV - it depends on what you're planning on running on your ESX farm, and what your expectations are.

And oh, don't let Dell sell you a GigE switch.

---- Carter Manucy
MikaA
Contributor

Heh, I already thought to myself "talk about a happy EQL user" - but then, you are not one.. :)

I read somewhere about getting 2 GigE of bandwidth by using one dual-port (or two single-port) HW HBAs - not for redundancy, but for another link to another storage box. Yes, that would not be very wise redundancy-wise, but it would get me the extra bandwidth, right?

Then again, as you and some others elsewhere point out, the theoretical bandwidth on the ESX is not so important in the end, disks matter more.

I'm not sure if anyone is selling LH in Finland, and I have a feeling it's not a mainstream enough solution for us. And no, we won't buy a switch from Dell.. :)

MikaA
Contributor

We got one of those boxes for some project, so I could try to get the test run on it within the next few weeks before it goes into actual use..

As Pasi (with maybe a slight vested interest in the matter :) ) pointed out, in the long run it would be, if not stupid, then not very wise to go the MD3000i route. Yes, I'm sure it works and provides the oomph necessary, but the administrative overhead of using all the fancy features would be too much, so they would be left unused. Then again, the road in our case may not be that long. I mean, how many 3 TB boxes does a small (<5000 people) school need?

Slightly off-topic: Traditionally we've always gone with the cheapest thing available (as a European edu institute we are bound by law to basically take the cheapest tender or be sued by the other bidders - no laughing there! :) ), a tradition I would very much like to change.. Before, this practice was limited to businesses in Finland, but now any garage shop in Europe can bid, and unless you are really, really careful in writing the invitations to tender, someone can make you buy e.g. a bunch of disks taped to the back of a motherboard..

But, on the upside, we have managed to get along with surprisingly crappy hardware sometimes.. :)

George_B
Enthusiast

I have just received an EQL PS100E, which is working out very nicely. The management and setup is a breeze. One thing I will say is that we had problems with one of our fan trays when the SAN initially arrived; the problem manifested itself as one of the disks in the array being seen as faulty. This was actually caused by a faulty drive environmental controller not seeing the disk spin up. I am slightly concerned that a fault in the fan tray can lead to a disk in the array actually going offline, but with two spare disks it was not a problem. The EQL support was first rate, though: it was diagnosed and the correct part shipped in reasonable time.

cmanucy
Hot Shot

> Slightly off-topic: Traditionally we've always gone with the cheapest thing available (as a European edu institute we are bound by law to basically take the cheapest tender or be sued by the others (no laughing there! :) ), a tradition I would very much like to change.. Before this practice was limited to businesses in Finland but now any garage-shop in Europe can bid and unless you are really really careful in writing the invitations to tender someone can make you buy e.g. a bunch of disks taped on a back of a motherboard..
>
> But, on the upside, we have managed to get along with surprisingly crappy hardware sometimes.. :)

Just make sure you say "must be on the VMware HCL" when you bid it :)

---- Carter Manucy
MikaA
Contributor

> I am slightly concerned that a fault in the fan tray can lead to a disk in the array actually going offline, but with two spare disks it was not a problem. The EQL support was first rate though and it was diagnosed and the correct part shipped in reasonable time.

Our only experience with SANs is an EMC CX300, which is expiring, but with it the only problem we've ever had in 3 years was a faulty disk a few weeks after we started using it. EMC has phone-home enabled by default, so I got a call (in the evening) from a support guy asking if he could come and fix a faulty disk, which we didn't even know was broken.. That was nice. I understand EQL has something like this as well - is anyone using it?

MikaA
Contributor

Yes, three cheapo iSCSI boxes with SAS disks will probably yield better throughput/performance than a single EQL/NetApp box, but one needs to balance that against the much easier administration and other advanced features the expensive boxes offer, which are either not available at all, or quite labour-intensive to use, on cheap boxes.

canadait
Hot Shot

How can it be built for ultimate redundancy if the failure of one box takes down volumes? Do you mean ultimate redundancy because it has multiple NICs and RAID?

christianZ
Champion

>That was nice. I understand EQL has something like this as well, is anyone using it?

Yes, we activated it and it works similarly to your EMC experience - but I'm not sure what kind of warranty is needed for that.
