admin
Immortal

Cisco 3750's vs. Dell PowerConnect 6248's

We all know Cisco is the de facto standard in network equipment and that the 3750s (or a 6500, for that matter) are the perfect complement to a highly available, high-performing VMware (or any virtual) environment. But I am curious how the PowerConnect 6248s (or 24s) stack up against the 3750 for an iSCSI network. We are all Cisco here (about 150 switches company-wide), but I need to get two new 48-port SAN switches in play immediately. The problem is that, due to cost, I am going to be forced to wait several months before seeing a new stack of 3750s for this environment. So instead I was contemplating purchasing the cheaper 6248s (does cheaper always mean not as good?), at least for the year.

Being Dell's top-end enterprise-oriented switch models, I figure they can't be too bad? I do know they use an open-standard CLI, which would take a little learning compared to the familiar Cisco IOS, but what's the general consensus on performance and reliability? I have read pros and cons from varying sources, but a lot of the feedback seems to be from people running them as core switches. Not that SAN traffic isn't a constant strain, but we're not talking about a ton of additional overhead for layer-3 routing, VLAN tagging, QoS, spanning tree, etc. Just simple jumbo frames and flow control. I was intending to run them stacked.
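For reference, the iSCSI-facing configuration I have in mind is minimal. On the Cisco side it would look roughly like the sketch below (VLAN number and interface are placeholders, and exact syntax varies by model and IOS release, so treat this as illustrative rather than a verified config):

```
! Illustrative 3750 iSCSI access-port setup (verify against your IOS release)
system mtu jumbo 9000            ! global jumbo MTU on the 3750; requires a reload
!
interface GigabitEthernet1/0/1
 description iSCSI to SAN
 switchport mode access
 switchport access vlan 100      ! dedicated iSCSI VLAN (example number)
 spanning-tree portfast          ! host-facing port, skip listening/learning
 flowcontrol receive desired     ! honor pause frames from the array
```

The equivalent on the PowerConnects would be the same handful of settings (per-port MTU, flow control, fast link), just with Dell's command names.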

Thoughts and opinions please.

0 Kudos
8 Replies
kac2
Expert

We used Cisco 3750Gs because I come from a Cisco background and that's what I feel comfortable deploying. I have a few colleagues who are impressed with HP ProCurve. I've also used stacked Nortel 5520s and was highly impressed with the Java configuration GUI, even though it's not a great time for Nortel.

admin
Immortal

Thanks for weighing in, kac2. My experience is mostly with Cisco, Nortel, and Juniper, so although I'm not reluctant to purchase a couple of these switches (given the cost), I am concerned about reliability and whether it's worth the time when all of our gear is Cisco. Our network guys don't care, since they don't really like working with our ESX environment anyway, given how easily they can be blamed for problems ("prove it's not a network problem," ha ha), and would rather see the environment isolated from their administration. I would hate to stake reliability on these. The flip side is that a handful of these can be purchased for less than the cost of a single 48-port 3750G, so you could create a 3-4 unit stack and cover yourself against failure, but I'd be disappointed in the performance... I would still like to know how the performance would hold up against 4 PS6000XVs, 11 hosts, and about 400 VMs.

Anyone able to share some experience or opinions? Please do.

kac2
Expert

Not sure how they would hold up against the PS6000, but you could step up to the 3750-E; its stacking backplane (StackWise Plus, 64 Gbps) runs at twice the speed of the 3750G's (StackWise, 32 Gbps).

s1xth
VMware Employee

My opinion...go with the Dell PC switches. ... just my opinion though.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
Rumple
Virtuoso

From a performance standpoint they shouldn't have any real problems. I would assume the CPU and backplane are good enough that you can hammer the switches and they will pass their full complement of traffic without buckling.

If you plan on living in the GUI for management then you should be ok.

The CLI, though, will quickly drive you crazy if you live in a Cisco world.

Personally, unless I absolutely had to have 48 ports, I'd buy a couple of 24-port Ciscos over the 48-port Dells... mainly because Cisco is going to be able to help you if you hit any routing issues or bugs in the switches, whereas with Dell... I wouldn't bet the farm on it.

AndreTheGiant
Immortal

But I am curious how the PowerConnect 6248's (or 24's) stack up against the 3750 for an iSCSI network?

If you plan to build a dedicated iSCSI network, consider using the PowerConnect 5xxx series.

The 6xxx has routing functions that are not needed for this kind of network, while the 5xxx has specific iSCSI optimizations.

Note that PowerConnect switches have a single integrated power supply, but you can address this in two ways:

  • plan good redundancy into the iSCSI network

  • use an additional external power supply (one unit can power four PowerConnects)

Andre

Andre | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
admin
Immortal

Thanks guys. So no one has experienced what you would consider show-stopping issues with the 62xx series?

To answer a couple of the questions: we do require 48-port switches. In fact, we require a minimum of three to accommodate our cabling needs. The 5xxx series does not offer stacking modules, which is why the 62xx is the only viable option. As such, we are looking at purchasing a handful to provide HA at the switching layer, with the improved throughput of the stacking backplane versus channeling 4-8 copper ports for an aggregate 10 Gbps.
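To make the backplane-versus-channeling trade-off concrete, here is a rough back-of-the-envelope sketch (the link counts and the stacking-bandwidth figure are illustrative assumptions, not vendor specs; the key point is that a port channel hashes each flow onto a single member link, so one iSCSI session never exceeds 1 Gbps over copper GbE):

```python
# Back-of-the-envelope comparison: GbE port channel vs. a stacking backplane.
# All figures below are illustrative assumptions, not measured numbers.

GBE_LINK_GBPS = 1.0  # one copper GigE member link


def lag_bandwidth(members: int) -> tuple:
    """Return (aggregate_gbps, per_flow_max_gbps) for a port channel.

    A LAG load-balances by hashing each flow (e.g. src/dst IP pair) onto
    one member link, so any single iSCSI session is capped at one link.
    """
    return members * GBE_LINK_GBPS, GBE_LINK_GBPS


# Channeling 8 copper ports: 8 Gbps aggregate, but only 1 Gbps per flow.
agg, per_flow = lag_bandwidth(8)
print(f"8-port LAG: {agg:.0f} Gbps aggregate, {per_flow:.0f} Gbps per flow")

# A stack backplane (hypothetical figure) carries inter-switch traffic
# without the per-flow hashing cap that a LAG imposes.
STACK_GBPS = 48.0  # assumed stacking-module bandwidth for illustration
print(f"Stack backplane: {STACK_GBPS:.0f} Gbps between stack members")
```

The per-flow cap is why a stacked pair can beat a fat port channel for a small number of hot iSCSI sessions even when the aggregate numbers look similar.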

shoulda
Contributor

Scott,

This might be a little late of an answer, but both switches perform really well. In fact, in some cases you might even notice better performance on the Dell switches than on the Cisco switches for iSCSI (the larger packet-buffer memory helps with flow control).

For iSCSI it's super easy to set up the switches, and since the SAN is isolated from the rest of the network, a pair of stackable Dell/HP/Force10/Extreme/etc. switches that support the standard features (jumbo frames, flow control, portfast) will do the job.

All in all, I have implemented both the Cisco and the Dell models on both the LAN and SAN side, and they both work well. The PowerConnects run maybe 75% less than the Cisco units, so if the trade-off is one headache of being transferred around in support while saving tens of thousands of dollars, it's probably worth it. Or you can opt for an advanced service plan that adds a little cost to the Dell, get SmartNet-type support, and still come in 50% less.
