Hi, I've got two ESX hosts connected via a Brocade FC switch to a Dell/EMC AX150 FC SAN. The SAN doesn't host any data except VMs. Each ESX host has one QLogic QLA2432 dual-port HBA. So, with the four connections from the HBAs to the switch and two connections from each storage processor on the SAN, I have eight total connections to the FC switch. Since there is nothing on the SAN but VM data, I currently have the zoning on the FC switch configured so that all the server ports can see everything on all the SAN ports.
However, I was speaking to a tech support rep at Dell last week and he suggested that I should not have it set up this way, as it will cause many collisions and decrease performance. Can anyone confirm this is true? And if so, what is the recommended topology for a setup like mine? How should I configure the zoning to maximize performance and reliability? Thanks.
You should set up a zone for each initiator–target pair that has to be connected. This is the recommended EMC practice. So in your case this would be:
each HBA port to one of the SP-A ports -> 4 zones
each HBA port to one of the SP-B ports -> 4 zones
You can also zone each HBA port to the other SP-A port, but that is not necessary.
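For anyone following along, here's roughly what that scheme looks like in Brocade FOS CLI: eight zones, one per initiator/target pair (4 HBA ports x 2 SPs). All the aliases and WWPNs below are made-up placeholders, not values from this thread, so substitute your own:

```
# Aliases for the four HBA ports (placeholder WWPNs)
alicreate "esx1_hba0", "21:00:00:e0:8b:00:00:01"
alicreate "esx1_hba1", "21:01:00:e0:8b:00:00:01"
alicreate "esx2_hba0", "21:00:00:e0:8b:00:00:02"
alicreate "esx2_hba1", "21:01:00:e0:8b:00:00:02"
# Aliases for one port on each storage processor (placeholder WWPNs)
alicreate "ax150_spa0", "50:06:01:60:00:00:00:01"
alicreate "ax150_spb0", "50:06:01:68:00:00:00:01"
# One zone per initiator/target pair
zonecreate "esx1_hba0__spa0", "esx1_hba0; ax150_spa0"
zonecreate "esx1_hba0__spb0", "esx1_hba0; ax150_spb0"
zonecreate "esx1_hba1__spa0", "esx1_hba1; ax150_spa0"
zonecreate "esx1_hba1__spb0", "esx1_hba1; ax150_spb0"
zonecreate "esx2_hba0__spa0", "esx2_hba0; ax150_spa0"
zonecreate "esx2_hba0__spb0", "esx2_hba0; ax150_spb0"
zonecreate "esx2_hba1__spa0", "esx2_hba1; ax150_spa0"
zonecreate "esx2_hba1__spb0", "esx2_hba1; ax150_spb0"
# Put the zones in a config and enable it
cfgcreate "ESX_CFG", "esx1_hba0__spa0; esx1_hba0__spb0; esx1_hba1__spa0; esx1_hba1__spb0"
cfgadd "ESX_CFG", "esx2_hba0__spa0; esx2_hba0__spb0; esx2_hba1__spa0; esx2_hba1__spb0"
cfgenable "ESX_CFG"
```

Each HBA port still ends up with a path to both SPs for failover; the only thing the zoning removes is initiators seeing each other.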
I'm new to zoning as well and have the same setup as Subversive, except with two Brocade FC switches. I introduced multipathing and created a single zone (VMware_ESX). I placed all HBA and SP WWPNs into this single zone without any issues. However, you're stating you should always isolate your initiator (HBA WWPN) and target. Am I correct?
An example configuration would look something like this, correct?
Why would it not be okay to do this?
Single-initiator/single-target zoning is not only encouraged by EMC, it's a pretty solid way to approach zoning, period.
There are several safety rules to follow in storage:
1. If you have multiple fabrics, name each zone and alias uniquely; this prevents problems in case of an accidental merge (i.e. following a SRV_FABRIC_PORT pattern, esxhost1_2_2 would be ESX host 1, on fabric 2, using the second HBA).
2. Use single-initiator/single-target zones; this prevents a single HBA going bonkers from taking down all of your paths from that host. Remember, everything in a zone can see everything else in it. A little research into FC will show you why this is not attractive.
3. Always separate tape into its own private VSANs if possible.
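Point 1 in practice: the same host zoned on two fabrics, with a fabric tag baked into every alias and zone name so nothing collides if the fabrics ever merge. Names and WWPNs here are purely illustrative:

```
# Fabric 1 (switch 1): host 1, first HBA port, tagged _F1
alicreate "esxhost1_hba1_F1", "21:00:00:e0:8b:aa:bb:01"
zonecreate "esxhost1_hba1_spa0_F1", "esxhost1_hba1_F1; ax150_spa0_F1"

# Fabric 2 (switch 2): same host, second HBA port, tagged _F2
alicreate "esxhost1_hba2_F2", "21:01:00:e0:8b:aa:bb:01"
zonecreate "esxhost1_hba2_spa1_F2", "esxhost1_hba2_F2; ax150_spa1_F2"
```

If the two fabrics merge by accident, the tagged names stay distinct instead of two different zones fighting over the same name, and it's obvious afterwards which definitions belong on which switch.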
But then it's a pain to remove the fabric-one instances from the second fabric... If they have different names (we use the same base name but append S1 or S2 for the fabric in question), then it's a simple matter. Yes, with the same name but different contents your fabrics will still work, but cleanup's a *****. It's far easier to just tag a number onto the name somewhere... It's a standard most SAN admins use, and it saves your neck on occasion, especially in a bladed environment, where your HBAs differ only by an octet or two.
Huh? How did the fabric one instances get into fabric two?
I do use unique alias and zone names, but not for easy cleanup; the reason is to allow an easy merge if one is ever needed.
I take it you've never had or seen an accidental fabric merge, then... Yes, it has other benefits (easy ability to find things, intentional fabric merges, etc.), but it will absolutely save your ass in the event of an accidental fabric merge.