VMware Cloud Community
renndabull
Contributor

iSCSI switch recommendations

Hi All,

I'm putting together a VI3 solution with an EqualLogic PS100E and two HP DL380s, and I need some advice on a pair of gig switches that support the requirements of the PS100E (flow control, jumbo frames). I plan to upgrade our existing 10/100 LAN with a new gig backbone (I only need two 48-port switches to cover the end-user community). I was thinking about going with a pair of 8-16 port gig switches just for the iSCSI SAN and purchasing cheaper 48-port gig switches for the LAN core (maybe Linksys).

Is this a good move? Or would it be better to invest in just two higher-end 48-port gig switches and consolidate the LAN and iSCSI SAN traffic?

Any info would be greatly appreciated!

16 Replies
doubleH
Expert

I'm sorta in the same boat as you. I'm in the middle of rolling out our small VI3 infrastructure, so I can't tell you my solution has been working for X months, but right now, while I'm putting all the pieces together, everything is working.

Summer '06 - Upgraded the network from a 3M 100FX fiber infrastructure to all-new CAT6 with two HP ProCurve 5406s. Each switch has 6 module slots to fill as you require. Right now I have 5 slots filled with 24-port 100/1000 modules. All workstations/servers/printers terminate in these 2 cores. I have VLANs for 1st floor, 2nd floor, servers, and storage (iSCSI).

April '07 - Bought two HP DL385 G2s with a QLA4052 and 2x NC360T cards in each host. Bought a PS100E and put it in its own storage VLAN along with the HBAs in each ESX host. On the storage VLAN I enabled flow control and jumbo frames.
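
For reference, the switch side of that is only a few lines. This is a minimal sketch rather than my exact config; the VLAN ID and port range are made-up examples, so check the jumbo/flow-control notes for your ProCurve firmware before copying it:

    ProCurve(config)# vlan 30
    ProCurve(vlan-30)# name "iSCSI"
    ProCurve(vlan-30)# untagged a1-a4    ; array and HBA ports live only in this VLAN
    ProCurve(vlan-30)# jumbo             ; jumbo frames are enabled per VLAN on ProCurve
    ProCurve(vlan-30)# exit
    ProCurve(config)# interface a1-a4 flow-control   ; flow control is set per port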

For VMotion I bought a cheap HP ProCurve 1800-24G switch. I only needed the 2 ports for VMotion, but the 8-port version isn't rack-mountable. Works great so far.

hth

If you found this or any other post helpful please consider the use of the Helpful/Correct buttons to award points
thechicco
Enthusiast

Got EqualLogic PS300Es here...

We are using Cisco 3750Gs (24-port, 3750G-24TS-1U) in a stack configuration, 2 members and a master. Works like a champ. The forwarding rate is the same as the 48-port version I believe: 38.7 Mpps. Easy to set up (docs are even on the EQL site) and failover works great.
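
The per-port setup is only a handful of IOS lines. Here's a sketch with example port and VLAN numbers (not our exact config); note the 3750 sets the jumbo MTU globally and only applies it after a reload:

    switch(config)# system mtu jumbo 9000            ! global on the 3750; needs a reload
    switch(config)# interface range gi1/0/1 - 8
    switch(config-if-range)# switchport mode access
    switch(config-if-range)# switchport access vlan 20
    switch(config-if-range)# flowcontrol receive desired
    switch(config-if-range)# spanning-tree portfast  ! array/HBA ports only, never uplinks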

I've heard that HP and Allied Telesyn have some good offerings. ACR will certainly back up Allied Telesyn. I had to go Cisco because our network admin refuses to touch anything else ;)

Oh, and definitely keep your iSCSI traffic on its own network (not simply VLAN'ed off).

Good luck.


happyhammer
Hot Shot

Yeah, we've got a PS100 connected to 3750Gs and all is working well; iSCSI traffic is on a separate VLAN.

The advantage of the 3750s is that they can be clustered/stacked together and run EtherChannel across both switches, giving redundancy.
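
For anyone who hasn't done it, a cross-stack channel is defined like a normal one, just with member ports on different stack units. A sketch with made-up interface numbers; note that early 3750 IOS only does static "on" mode (no PAgP/LACP) across stack members, so check your release notes:

    switch(config)# interface range gi1/0/1 , gi2/0/1   ! one port on each stack member
    switch(config-if-range)# channel-group 1 mode on    ! static channel; cross-stack LACP came later
    switch(config-if-range)# exit
    switch(config)# interface port-channel 1
    switch(config-if)# switchport mode trunk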

renndabull
Contributor

Thanks for the insight. It seems like the more I research, the more it looks like the best config is to use separate switches for iSCSI and LAN traffic. Do you know who makes a small 8-port gig switch that supports jumbos and flow control?

doubleH
Expert

Best practice, yes: have separate switches for iSCSI and LAN traffic, but it also depends on the environment. For me, with 120 users, I don't need/want more hardware and want to keep it as simple as possible. My 5406s have plenty of bandwidth as well. One of the reasons I went with iSCSI is that I could use existing infrastructure and not have to build out a separate FC infrastructure.

If you found this or any other post helpful please consider the use of the Helpful/Correct buttons to award points
BenConrad
Expert

I've run into some issues with my switching infrastructure, here are a few things to watch out for:

- Make sure the switch ports are not oversubscribed and/or make sure the switch backplane can handle close to the aggregate of the port bandwidth on the switch. I think all the newer 24/48-port Cisco switches meet or exceed this requirement. In my case I have Cisco 6509s, and the blades for this chassis vary widely in how ports and ASICs are configured. The best blade is the 6748-GE-TX.

- Switch port buffer size. At a minimum you need 256-512 KB of buffer space allocated to each gigabit port. The more buffer space you have, the less you will need flow control, and the better you will be able to take advantage of jumbos.
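
The quickest way to see whether buffers are your problem is to watch the output-drop counters under load; on IOS, something like this (interface name is just an example):

    switch# clear counters gi1/0/1
    switch# show interfaces gi1/0/1 | include drops

If "Total output drops" keeps climbing between samples, the port is running out of egress buffer and you'll want flow control enabled or a deeper-buffered blade.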

Ben

renndabull
Contributor

Thanks for the input, I'll keep that in mind.

Are you using separate switches for your iSCSI traffic and LAN traffic?

BenConrad
Expert

Yes, that is the only choice we had; we are running multiple VLANs. As an alternative, I would have liked to get two Cisco 6506s with two Sup720s each, but that is a lot of coin, and we don't have the space for them in the datacenter. We're going to have more than 96 ports by the end of the summer, so 2 x 48-port Gb switches would not work for us.

Ben

HuntAJ
Contributor

We use HP ProCurve 5400zl switches for both SANs and LANs. I'm not limited by budget or any other constraint, but the ProCurve switches impressed us right away with ease of installation/configuration and no ongoing maintenance costs. Our first one out of the box was configured with jumbo frames and flow control and in service in about 20 minutes.

murreyaw
Enthusiast

I am using a pair of Cisco 2960G-48s. They are great switches. Use of VLANs lets me leverage the switches for both the LAN and storage networks while maintaining software separation.
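
A sketch of that kind of separation, with example VLAN IDs and an example trunk port linking the pair (the 2960 is 802.1Q-only, so there's no encapsulation command to set):

    switch(config)# vlan 10
    switch(config-vlan)# name LAN
    switch(config-vlan)# vlan 20
    switch(config-vlan)# name iSCSI
    switch(config-vlan)# exit
    switch(config)# interface gi0/48
    switch(config-if)# switchport mode trunk
    switch(config-if)# switchport trunk allowed vlan 10,20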

renndabull
Contributor

Thanks all. I think I'm going with 2 HP ProCurve 2900-24s; my vendor is throwing in one for free to help the sale. These support jumbos and flow control and will give me some room to grow.

Yps
Enthusiast

HP ProCurve doesn't support jumbo frames and flow control on the same VLAN/port; you can see this in the manual.

I have tested with a 2824, and I got warnings in the system log.

I bought a pair of Cisco 3750-24Gs for our PS400E.

/Magnus

murreyaw
Enthusiast

This is a true statement. The ProCurve is limited in its software functionality.

BenConrad
Expert

The 2900 supports jumbo and flow control on the same port.

See the T.12.06 release notes:

Clarifications

The following clarifications apply to series 2900 switch documentation as of the T.12.00 release.

■ Enabling Jumbo Frames and Flow Control

The 2900 series switches support simultaneous use of Jumbo Frames and Flow Control, and the switch allows flow control and jumbo packet capability to co-exist on a port. (The earlier version of the Management and Configuration Guide incorrectly stated that these features could not be enabled at the same time.)

Ben

Yps
Enthusiast

Thanks for the info.

/Magnus

Atamido
Contributor

I'm looking at implementing a basic HA setup with two ESX servers and two iSCSI SAN boxes. We're trying to stay under $50k for the entire thing. As part of the HA design, we want to use two different switches to connect the ESX boxes to the SANs. Each ESX box will use a different switch as its primary, so outside of a failure it will be one ESX box to one SAN. If we team each of the connections (2x 1Gb links) from each box, we wouldn't use more than 8 ports.

So we're looking for a switch that supports:

1. At least 8x Gigabit ports

2. NIC teaming (aggregation, 802.3ad, etc.; see the ESX-side sketch after this list)

3. Jumbo frames
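
On the ESX side, I'm assuming the vSwitch for those teamed links gets built roughly like this; the vSwitch/vmnic names and IP are placeholders, and the IP-hash teaming policy that static 802.3ad needs is set through the VI Client, so it isn't shown:

    # example only: vSwitch for iSCSI with the two teamed uplinks
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2
    # port group and VMkernel interface on the iSCSI network
    esxcfg-vswitch -A "iSCSI" vSwitch2
    esxcfg-vmknic -a -i 10.0.20.11 -n 255.255.255.0 "iSCSI"

I'd also want to double-check jumbo frame support in the ESX 3 software initiator before counting on requirement 3 end to end.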

It looks like the Dell PowerConnect 2716 supports all of this, is less than $200, and is rated at:

* Switching Capacity: 32.0 Gbps

* Forwarding Rate: 23.7 Mpps

http://www.dell.com/content/products/productdetails.aspx/pwcnt_2716?c=us&cs=04&l=en&s=bsd

The 2724 is less than $300 and rated at 48.0 Gbps / 35.6 Mpps.

What is going to be the practical difference between one of these switches and the switches listed above that cost 3-10 times as much, when it is only being used for iSCSI between a few devices? Is there really going to be a noticeable difference?
