VMware Cloud Community
pdrace
Hot Shot

Switch meshing on HP Procurve switches

I am using two HP ProCurve 3400cl switches for iSCSI traffic between a pair of clustered NetApp filers and two servers that each have two QLogic 4050c HBAs. They have no connection to our public network routers and switches.

I am trying to come up with a design that is redundant at the HBA level as well as at the switch and NetApp levels. On the NetApp side I can create a single virtual interface (vif) that spans both switches, which provides redundancy against a switch failure. It doesn't help if an HBA fails, though: one of the NetApp connections is in standby and won't know that an HBA has failed, and the standby HBA won't have a path to the NetApp on the other switch.

So far, linking the switches together and creating a "mesh domain" looks like my best option.
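For context, enabling meshing on a ProCurve switch is roughly the following CLI sequence. This is a sketch from memory, not verified against the 3400cl: the port numbers 21-22 are placeholders for the inter-switch links, the same configuration is needed on each switch in the mesh, and mesh membership normally only takes effect after a reboot. Check the switch's Advanced Traffic Management Guide for your firmware.

```
ProCurve(config)# mesh 21-22
ProCurve(config)# write memory
ProCurve(config)# exit
ProCurve# boot
ProCurve# show mesh
```

`show mesh` should list the mesh ports and their state once the switches are back up and cabled together.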

Anybody have any experience with this, or suggestions?

Thanks.

11 Replies
Anders
Expert

I've implemented a mesh design for a regular network on ESX server 2.x.

Worked like a champ!

We pulled cables, plugged them back in, rebooted, and pinged; we threw everything we had at it with barely a hiccup.

The only thing we never figured out was a constant 2 Mbit/s of broadcast traffic on all ports, but it didn't impact performance at all...

- Anders

pdrace
Hot Shot

That sounds hopeful for my situation.

Thanks.

oreeh
Immortal

Take a look at the switch manuals.

There are some limitations when using meshing.

Example: you can't use meshing and routing in parallel.

There are probably some more limitations, but these depend on the switch and the switch firmware used.


Also: carefully read the firmware release notes.

oreeh
Immortal

A word regarding meshing.

Meshing is normally used to build a redundant topology with more than two switches.

If only two switches are involved, you can use trunks instead (and avoid the meshing limitations).
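With just two switches, the inter-switch link could instead be an LACP port trunk; a hedged sketch of the ProCurve CLI (port numbers and the trunk name trk1 are placeholders; configure the matching trunk on both switches before cabling the links):

```
ProCurve(config)# trunk 21-22 trk1 lacp
ProCurve(config)# show trunks
```

This aggregates the two links for bandwidth and failover without the restrictions that come with a mesh domain.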

pdrace
Hot Shot

Thanks for the heads up.

I don't see that listed as a limitation in the guide I'm looking at, but I don't plan to do any routing on these switches anyway.

pdrace
Hot Shot

Thanks, I'll pass that on to my network people.

pdrace
Hot Shot

I had my network guy set up the switches in a mesh domain today.

Created a single vif on the NetApp side, with one connection to each switch.

Plugged one HBA into one switch and the other into the second.

I then started some VMs and pulled a cable from the NetApp: no problem.

Pulled the cable on the active HBA: no problem.
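For anyone repeating this, the single-mode vif described above is created on a 7-mode filer roughly like so (a sketch: the interface names e0a/e0b and the IP address are placeholders, and the commands also need to go into /etc/rc to persist across reboots):

```
filer> vif create single vif0 e0a e0b
filer> ifconfig vif0 192.168.10.10 netmask 255.255.255.0 up
filer> vif status vif0
```

In single mode only one link carries traffic and the other stands by, which is what makes the cross-switch cabling survive a switch failure.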

Thanks

Anders
Expert

Would you mind doing some tests to check whether there is a pause in I/O when the data path is switched on the NetApp?

Like pulling the plug on the active controller?

- Anders

pdrace
Hot Shot

I think I pulled the plug on the active controller; at least it was the active one last time I looked!

I didn't see any errors in the VM's system log after doing this.

I'll have to do some more testing while running something I/O-intensive on the VM.

I'm having issues seeing the paths for a datastore now. When I open the properties of a datastore, it shows nothing in the extents window. If I click on "Manage Paths" it produces an error pop-up with the message: "Specified argument was out of range of valid values."

Parameter: "index"

I've seen this before; I'm not sure what's up.

pdrace
Hot Shot

Would you mind doing some tests to check whether there is a pause in I/O when the data path is switched on the NetApp? Like pulling the plug on the active controller?

I tested this by pulling the plug on the active controller while copying data to a VM. I don't know if there was a pause, but the copy continued and finished successfully. I also pulled the active network cable on the NetApp side (single vif), and it survived that as well.

No pop-ups or system log messages on the VM.

That said, I'm now having some major storage issues after upgrading the NetApp to Data ONTAP 7.21P1 this past weekend.

sgloeckle
Contributor

Hi, just read your article.

We too have built a (full) mesh with four (2x2) ProCurve 3500yl switches. It works fine.

Now we are having trouble configuring the FAS3140 MetroCluster (iSCSI) with load balancing and fault tolerance (2 x 2 ports per FAS to each of the two switches per side).

NICs in use on the ESX servers: NC375T, with the software iSCSI initiator only!

Do you have any suggestions on how to configure the NICs on the FAS so that iSCSI runs smoothly with our mesh and ESX datacenter?

thanks a lot for your answers...

cheers
