VMware Cloud Community
clkmgt
Contributor

ESX 3.5 with Celerra NS40 backend - iSCSI or Fibre?

We are currently in the midst of migrating our datacenter over to ESX 3.5. We have about 60 servers in our organization, and about 20 of those have been virtualized so far. We have a Dell blade chassis with four 1955 blades (two dual-core Xeon 5160s and 16GB RAM per blade) running off of a single shelf of 146GB 15K drives on an EMC Celerra NS40 (three 4+1 RAID 5 disk groups set up in a stripe, per EMC's recommendations for maximum I/O). We're currently using iSCSI for connectivity and have two ports on the Celerra dedicated to iSCSI. Each blade has four gigabit ports running to a dedicated network blade in our Cisco 6513 core switch. The SAN NICs are also terminated in this same blade.

My question is this: doing some load tests with IOMeter and the real-world benchmark test from the open storage performance thread, I'm only seeing approximately 5,000 IOPS, or about 20MB/sec, across the 15 spindles. This seems EXTREMELY low to me, and rather than hit a performance wall in six months, I'm researching whether we should move to Fibre Channel for the interconnect. However, perusing the storage thread reveals that these numbers may not be that bad at all.
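For context on those numbers, a quick back-of-the-envelope check is worth doing, assuming the 5,000 IOPS and 20MB/sec figures came from the same test run and using the common rule of thumb of roughly 180 random IOPS per 15K drive (both assumptions, not figures from the thread):

```python
# Back-of-the-envelope check on the reported IOMeter results.
# Assumptions: 5,000 IOPS and 20 MB/s are from the same run; a 15K
# drive sustains roughly 180 random IOPS (a common rule of thumb).
iops = 5000
throughput_bytes = 20 * 1_000_000  # 20 MB/s
spindles = 15

# Implied average transfer size per I/O.
bytes_per_io = throughput_bytes / iops
print(f"Implied I/O size: {bytes_per_io / 1024:.1f} KiB")  # ~3.9 KiB

# Load spread across the 15 spindles.
per_spindle = iops / spindles
print(f"IOPS per spindle: {per_spindle:.0f}")  # ~333
```

At roughly 333 IOPS per spindle, well above what a 15K drive typically sustains on random I/O, the array cache or a sequential access pattern is likely helping, so the 20MB/sec figure may say more about the small I/O size of the test than about an iSCSI bottleneck; for comparison, a single GigE link tops out around 110-120MB/sec in practice.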

Basically, we are getting ready for a hiring frenzy (I work in a call center) and will be going from approx 500 employees to around 1000 in the next year. I want to make sure iSCSI will not be our bottleneck and would rather make the move to FC now, when we have no critical systems in production on our ESX cluster versus later when it would be much more difficult to migrate everything over.

I'm open to any suggestions. Thanks much for reading my diatribe - appreciate the help.

Wes

6 Replies
azn2kew
Champion

Read this article; it most likely depends on your design and bandwidth, but NFS is not a bad solution either, especially if you have NetApp gear.

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!

Regards,

Stefan Nguyen

iGeek Systems LLC.

VMware, Citrix, Microsoft Consultant

John_S1
Enthusiast

I can tell you that our experience with EMC iSCSI CLARiiON arrays has been horrible. The performance and features they claim are far different from what is actually delivered. Trying to do multipathing from a single host brought the array to its knees. Configuring the iSCSI provider to work with the SAN by itself was tedious, and then they added PowerPath into the mix. It would never have scaled out beyond that one host. When we went live with the device, we had so many customer complaints about performance that we ended up with EMC onsite and a 38-hour support call. Yes, 38 hours of being shuffled around the globe from support center to support center. What a disaster.

EMC ultimately could not fix the issue, trying to blame it on the Cisco switches and fragmentation. Eventually, when they ran out of reasons why it wasn't their device, they did a forklift upgrade of the unit to a Fibre Channel model.

Now, our experience with EqualLogic has been just the opposite. The device performed equal to or better than our CX3-20 Fibre Channel SAN. The ease of use was incredible. Expansion was a snap. Setting up the iSCSI backend was simple. The unit just worked.

So while it wasn't a Celerra, it was EMC and the experience was horrible.

clkmgt
Contributor

Thanks to you both for the input!

JohnS, you mentioned you were bumped up to a Fibre Channel unit; how did that perform for you, especially in comparison to iSCSI? Sorry to hear about your EMC experience; ours has been almost 100% positive with both the CLARiiON (an old CX300i) and the Celerra we have in production.

We considered EqualLogic, but the Celerra NAS frontend (the Celerra can emulate a Windows file server, including VSCS; it takes about 30 seconds to "build" a file server) allowed us to completely kill about 15 different file servers. That's 15 Windows licenses saved, as well as not having to run 15 virtual machines doing nothing but serving files...so in that regard it's been a good fit.

John_S1
Enthusiast

The EMC Fibre Channel SANs have been great. Performance compared to the EqualLogic iSCSI was equal or slightly better in our testing. And for the cost, it should be! We do have a few Celerra NAS heads, but they all connect to the Fibre Channel SANs. They work well, but they do not handle a mixed Windows/OS X environment sharing the same volumes very well, so we've stayed away from them for that. That, and getting antivirus and quotas to work well with them has been quite the challenge.

clkmgt
Contributor

Thanks John! We have EMC and VMware coming out to our shop next week (hopefully), so we should be able to bounce some ideas off them then. If we moved to fibre, we'd probably bypass the NAS frontend, as we'd have to connect directly to the storage processors...(I think?)...good to hear that you're happy with the performance, though.

I'm about dead from reading white papers and forum posts both here and over at EMC - thanks again for the input and help.

RParker
Immortal

Our data admins tried to convert us to NFS; our Unix admin and VMware tech rep laughed in his face. Fibre cannot be beat, period.

NFS only draws comparisons to iSCSI in similar testing because they run on the same topology, but you cannot touch the performance of Fibre; it wins every time, hands down.

This is assuming 2Gb Fibre or better. With 8Gb Fibre upon us, I'd say Fibre has a LOOOOONG future...
