VMware Cloud Community
whuber97
Enthusiast

Dell PowerConnect 5424 switches for iSCSI

Hi everyone,

Curious to know if anyone here has any experience using the Dell PowerConnect 5424 switches for iSCSI storage networks? Are you pleased with them? Can you post benchmark results using them?

As far as I can tell - I can't find a switch that has all the features this one does for the money... These switches support flow control and jumbo frames at the same time, 35 million pps throughput, a 48 Gbps switching backplane, a 6MB packet buffer (this is huge - most other switches at this price point have 512KB or 768KB at most), port mirroring, trunking, VLANs, QoS, spanning tree, and so on.
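(As a quick sanity check on that backplane figure: a 24-port gigabit switch needs 24 ports × 1 Gbps × 2 for full duplex = 48 Gbps to be non-blocking, so 48 Gbps is exactly line rate for this box.)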

If there is a better switch out there that supports all of these features at the $800 price point, I haven't found it in my research. Just figured I would ask the experts here before making my final decision.

vExpert 2012, 2013 | VCDX #81 | @huberw
Josh26
Virtuoso

What else have you got on your network?

The big thing with this sort of unit is just how well Dell's interpretation of some of these features plays with HP's/Cisco's/whoever's. I'm not saying it's bad, I'm just saying check it out. We had a lot of experience with Netgears that had good-looking specs until you connected them to a different type of switch and spanning tree went pants-up and dropped networks.

whuber97
Enthusiast

Hi Josh,

Thanks for replying.

The rest of the network is made up primarily of HP ProCurve switches - mostly 1800 and 2800 series. The reason we are looking at the Dell switches for the iSCSI network is that HP can't touch Dell's price on a switch with the same feature set. To get a switch with the same features it looks like we would need a ProCurve 2900, which is no cheap date. Anything smaller on the ProCurve side, like the 2800, 2600, 2500, or 1800 switches, has buffers under 768KB and won't support flow control and jumbos at the same time - it's one or the other.

The existing ProCurves will have a single link to the iSCSI switches on a management VLAN - that's it. Otherwise these Dell switches would be strictly for iSCSI traffic, so I wouldn't think we would run into an issue such as the spanning tree problem you mentioned above.

Can anyone out there using this switch confirm if it is as good as it looks on paper?

vExpert 2012, 2013 | VCDX #81 | @huberw
sketchy00
Hot Shot

We have two of them connected up to a Dell/EQ iSCSI SAN. I do not have any performance metrics to back up my statements, but they've been functioning fine. One of the nicer features was that because they had been approved by Equallogic, they came with a nice little instruction sheet on the exact steps to perform to make these switches work best with an Equallogic SAN (e.g. enabling flow control and jumbo frames, disabling STP on all non-trunking ports, disabling storm control, and, ironically, disabling iSCSI optimization). The only part I had to get used to was that I had been spoiled over the years with switches that had stacking modules. But I trunked 3 ports to each other to provide the interconnect, and it's been fine.

IRIX201110141
Champion

We have countless PC5448s and they all work fine. We also use them for the iSCSI SAN with great success.

Regards

Joerg

'Remember, if you found this or other answers helpful, do not forget to award points by marking an answer as helpful or correct.'

AndreTheGiant
Immortal

I've used these switches with both an MD3000i and an Equallogic SAN, and they work fine (like other switches).

Be careful only with Equallogic, because you have to disable the iSCSI acceleration, which is not compatible with Equallogic MPIO.
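For reference, that's the same setting covered in the install notes later in this thread; on the 5424 CLI it's just:

configure
no iscsi enable
exit
copy running-config startup-config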

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
sbor69
Contributor

Would it be possible for you to post the link to the instruction sheet from EQL that you are referring to?

sketchy00
Hot Shot

I can't seem to find the document itself or the link, but here are my install notes, which reflected the doc exactly. Be sure to back up your config before AND after, and commit it to memory so that you don't have any surprises on reboot of the switches. :)

Assumes one has typed in "enable" to elevate privileges.

1. Disable the iSCSI optimization setting on all SAN switches:

configure
no iscsi enable
exit
copy running-config startup-config
exit

2. Enable the "portfast" option to configure STP edge ports:

configure
spanning-tree mode rstp
interface ethernet g10 (and any other ports connected to end devices; per step 6, leave spanning tree on the ports that trunk the two switches together)
spanning-tree portfast
exit
exit
copy running-config startup-config
exit

3. Configure (enable) flow control (off by default):

configure
interface range ethernet g(1-48) (g(1-24) on the 24-port 5424)
speed 1000
duplex full
flowcontrol on
exit
exit
copy running-config startup-config
exit

4. Disable storm control on all ports:

configure
interface range ethernet all
no port storm-control broadcast enable
exit
exit
copy running-config startup-config
exit

5. Enable jumbo frames:

configure
port jumbo-frame
exit
copy running-config startup-config
exit

6. Per the Equallogic quickstart guide (p. 10), disable STP on all ports that don't trunk together the two switches.
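For step 6, a minimal sketch using the same portfast form as step 2 (the port range here is an assumption; adjust it to wherever your hosts and SAN actually plug in, and leave the inter-switch trunk ports alone):

configure
interface range ethernet g(1-20)
spanning-tree portfast
exit
exit
copy running-config startup-config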

sbor69
Contributor

Ok, I'm familiar with these settings. However, I'm a little unclear on how you connect the two switches. I'm planning on using a LAG with 4 physical ports and trunking a separate VLAN across the LAG. Do you actually trunk the VLAN, or are the LAG ports set to access mode? Would you be able to post or PM your config for the LAG/trunk on these switches? The way I see it, there are several ways of doing this.

sketchy00
Hot Shot

Well, all I can tell you is what I did. I trunked the first 3 ports on each switch using the instructions provided, and have P1 on switch 1 plugged into P1 on switch 2, etc. Not as clean as a stacking module, but oh well. If I recall correctly, these are the only ports that were configured slightly differently from the rest of the "edge" ports that go to the SAN or to the ESX hosts. That setup should offer you the switch redundancy you're looking for. I'll assume the redundant connections from your ESX hosts will be using different network cards, so that you don't have an iSCSI outage if one of your dual-port NICs fails.
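If it helps, here's a minimal sketch of that interconnect as a static 3-port LAG (assuming the first three ports on each switch; it's the same channel-group command sbor69's config below uses):

configure
interface range ethernet g(1-3)
channel-group 1 mode on
exit
exit
copy running-config startup-config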

I don't have any VLANs on those dedicated SAN switches other than the default.

Does that clear it up for you?

sbor69
Contributor

OK, so you are only using the native VLAN. That makes sense. It's just that I would prefer to set up a separate VLAN for the iSCSI traffic and leave the default VLAN 1 for management of the switches only. My question then is how to configure the VLAN across the LAG (trunk or access mode). I have submitted an issue to Dell, and will post it here if anything useful comes back.

sketchy00
Hot Shot

Yeah, I definitely don't do that here, but I would be very interested in how that configuration looks. Since they were dedicated switches, and I was more focused on getting things running, the nice-to-have setup of a separate VLAN for iSCSI switch management never crossed my mind.

sbor69
Contributor

After discussing this with Dell, they came up with some recommendations that are not necessarily easy to find in the documentation (for network experts, however, this may be obvious).

The points they made were the following:

1. A VLAN across the LAG is recommended, especially if using jumbo frames.

2. No configuration on the physical ports used for the LAG. Any config should be put on the port-channel; the ports will inherit it.

3. Spanning tree should be enabled on the LAG ports, and any other inter-switch links, but not on the ports used for end devices (iSCSI).

4. Flow control on all ports.

With this I came up with the following config, which I will now test with an EQL PS5000XV and 4x PE R710. I'm using port 1 for management, ports 2-20 for iSCSI traffic, and ports 21-24 for the port-channel.

spanning-tree mode rstp

interface range ethernet g(2-20)
spanning-tree portfast
exit

interface port-channel 1
flowcontrol on
exit

interface range ethernet g(2-20)
flowcontrol on
exit

interface port-channel 1
switchport mode trunk
exit

vlan database
vlan 500
exit

interface range ethernet g(2-20)
switchport access vlan 500
exit

interface port-channel 1
switchport trunk allowed vlan add 500
exit

interface vlan 500
name iSCSI
exit

interface range ethernet g(21-24)
channel-group 1 mode on
exit

iscsi target port 860 address 0.0.0.0
iscsi target port 3260 address 0.0.0.0
no iscsi enable

interface vlan 500
exit
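To sanity-check the result afterwards, the usual show commands should do (names as I recall them from the 5424 CLI guide, so verify against your firmware):

show vlan
show interfaces port-channel 1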

wsaxon
Contributor

Are you sure they have a 6MB buffer? I notice in their literature that everything is clearly marked as MB except the buffer size, which is marked Mb. 6 Mb as in megabits would be 0.75MB, or 768KB, which is more in line with the rest of the switches in this price range.
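For the arithmetic: 6 Mb = 6 × 1,048,576 bits = 6,291,456 bits, and 6,291,456 ÷ 8 bits per byte = 786,432 bytes = 768KB. So megabit vs. megabyte here is exactly the difference between a 768KB buffer and a 6MB one.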

TXProblemSolver
Contributor

I have got great performance from the 5424 switches so far.

I am using 2 Dell PowerConnect 5424 switches as a dedicated iSCSI network. Using 2 allows redundant paths to the SAN in case one ever fails.

The SAN is an EMC AX4-5i (dual storage processor), also bought through Dell, plus a couple of virtual hosts running vSphere 4.0 on Dell servers.

IMPORTANT NOTES FOR OPTIMUM PERFORMANCE:

I configured my switches according to this guide from Dell here. I've also attached the guide as a PDF in case it ever disappears from Dell's site.

The performance increase was substantial and immediately noticeable.

If you don't optimize the switches for iSCSI you will probably be disappointed (but that goes for any switch).

Usually we are a Cisco-only shop, but even the head network guy agreed there is no need to spend more for iSCSI switches.

This is especially true if you are only using them for iSCSI traffic and not mixing network and iSCSI traffic on the same switches.

I've been using these switches for about 6 months now, and I plan to use them at another datacenter later this year if the model is still current.

I hope this post helps someone with their decision.

*You also need to configure/enable jumbo frames on the ESX hosts to take advantage of them.
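For reference, a minimal sketch of the host side on ESX 4.0 from the service console or vMA (vSwitch1, the iSCSI port group name, and the IP below are made-up examples; in 4.0 a jumbo-frame vmkernel port has to be created from the CLI):

esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vswitch -A iSCSI vSwitch1
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 -m 9000 iSCSI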

*Spend the money to buy dedicated 5424 switches that only run iSCSI traffic. Performance is great and these switches are cheap, so why not run a dedicated iSCSI network and skip the VLANs and sharing traffic with other switches. I am glad I made the decision to run a dedicated iSCSI network on these switches.

RTatis
Contributor

I know this is an old thread, but I came across it when searching for an accurate specification of the actual packet buffer memory on this switch. The product page shows 6Mb (as in megabits, not megabytes!) and the user manual actually says 2Mb (megabits again, not megabytes).

I reached out to Dell technical support to get a response from them, and what they informed me was that the accurate specification is 6 megabits, which is equivalent to 0.75 megabytes or 768 kilobytes, and NOT 6 megabytes as some are quick to think when they first look at the product page. Like wsaxon stated, 768KB is more in line with other competitive switches in this price range, though slightly more than others, which seem to offer 512KB.

Here's the line from the Dell chat rep in answering my question.

12:38:48 PM     Agent   S&P_Eligio It is actually Megabit, not Megabyte.

12:39:52 PM     Agent   S&P_Eligio 2Mb is the on chip packet processor....  6Mb is the total packet buffer memory.

Hope this helps others that run into this thread as I did.
