VMware Cloud Community
jreininger
Enthusiast

SAN Connectivity? -- On Dell PowerEdge M600 blades - HBA fiber run question.

We just got a new Dell PowerEdge chassis, half populated with 8 M600 blades. Each blade has the optional HBA card on the blade motherboard. We bought two Brocade 4424 fiber switches, which are installed in the rear of the chassis, so I have 8 rear fiber ports on each 4424. We had Dell come set up the SAN.

Our fiber is set up so that, from the back of the chassis, all 16 fiber ports go to an external Brocade 5000 FC switch. Then, from the Brocade 5000, 4 ports are hooked up to the Storage Processors.

My understanding is that at any time there can only be a single active FC path from a host to an SP. What is the point of having SO many fiber runs going from the chassis-internal (4424) switches to the external (5000) switch if only 4 runs make it to the SPs? Additional servers are attached to the external 5000 FC switch and are presented LUNs as well.

Why do we need so many runs? I could understand it if there were some way to do 'link aggregation' on those fiber runs. But in this case, since only a single CX300 is hooked to the external Brocade 5000 switch, what are all those fiber runs doing for us?

A very simple drawing is attached.

Thanks.

VMware VCP 3.5 VMware VCP 4.0 VMware VCP 5.0
5 Replies
kjb007
Immortal

That is a lot of bandwidth. Typically, the internal fiber switch will, in essence, trunk the connections from the servers behind it up to an upstream fiber switch. This adds redundancy to the fiber paths you have available.

8 uplink ports per switch seems a bit excessive to me for an internal fiber switch; typically I've seen 4-port uplinks. How many blades are in your chassis? Are you sure this isn't a fiber pass-through module instead of an actual switch?

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
mike_laspina
Champion

Hello,

I would be concerned about having only one external switch; two would be better. What happens when you need to update the firmware, or one dies? Best practice is to have two physically separate switches on separate power paths.

http://blog.laspina.ca/ vExpert 2009
jreininger
Enthusiast

True, Thanks.

I forgot to mention this is a test lab, but the same gear will be used in production (we have three more complete setups like this to deploy soon). In production we will use two external switches for fault tolerance, so each 4424 will have uplinks to its own 5000 switch. Then each 5000 switch will hook up to each SP.

VMware VCP 3.5 VMware VCP 4.0 VMware VCP 5.0
jreininger
Enthusiast

Just 8 blades in this chassis (it can hold 16). And in production, the other two chassis I'll be setting up will be fully populated. :)

It's a real switch (I am aware Dell makes cheaper pass-through modules for the NICs and the HBAs). This is the link to the unit:

http://www.brocade.com/partners/Dell_M4424_Blade.jsp

I still can't see why one would need more bandwidth than the SPs themselves could ever dish out, UNLESS you had multiple SANs hooked to the external switch. That would be nice in the case of 16 blades all crunching on SAN LUNs, since the additional SPs would be there to push data out.

I am also still at a loss, because I thought the whole active/passive "most recently used" deal with SANs ensures that only a single path is ever active from host to SP. That means it can't work like Layer 2 NIC switching, which has networking protocols for link aggregation (where multiple paths get used based on source or destination MACs). Or does something like that exist for SANs too?
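The single-active-path behavior being described can be sketched like this (a toy model with made-up path names, not any vendor's actual multipathing code): only one path carries I/O at a time, and traffic moves to another path only when the active one fails, unlike Ethernet link aggregation, which spreads traffic across links.

```python
# Toy model of Most-Recently-Used (MRU) path selection: one active path per
# LUN; on failure, fail over to a surviving path and stay there. There is no
# load balancing across paths, unlike Ethernet link aggregation.

class MruLun:
    def __init__(self, paths):
        self.paths = list(paths)      # e.g. ["vmhba1:SPA", "vmhba2:SPB"] (hypothetical names)
        self.active = self.paths[0]   # a single path carries all I/O

    def path_for_io(self):
        return self.active            # every I/O uses the one active path

    def fail_path(self, path):
        self.paths.remove(path)
        if self.active == path:       # fail over only when the active path dies
            self.active = self.paths[0]

lun = MruLun(["vmhba1:SPA", "vmhba2:SPA"])
print(lun.path_for_io())      # all I/O stays on the first path
lun.fail_path("vmhba1:SPA")
print(lun.path_for_io())      # after a failure, I/O moves to the survivor
```

So the extra fiber runs buy alternate paths to fail over to, not aggregated throughput.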

VMware VCP 3.5 VMware VCP 4.0 VMware VCP 5.0
mike_laspina
Champion
(Accepted Solution)

Each switch blade provides 8 physical 4Gb FC connections and 24 logical ports, × 2 modules. This maps onto your host blades, so over the entire system you have the bandwidth of one physical FC link per blade.

The FC switch blades can be configured as 2 ISLs to the main switch, and the logical ports can map out your paths. So yes, FC in some ways can act just like Ethernet switches.

The CX300 is an active/active SAN, meaning initiators can access shared storage on multiple SPs, but each initiator can only have one active path (SP) at a time. Here you will likely have 8+ initiators, so it's 2:1 subscribed to each SP.

The CX300 cannot make use of all that maximum bandwidth, but many high-end arrays could. For this config, the CX300 seems to leave the fabric heavily undersubscribed in bandwidth.
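A rough back-of-the-envelope check makes the point (a sketch; the port counts come from this thread, and the 2Gb/s CX300 front-end speed is an assumption, not a measured value):

```python
# Back-of-the-envelope fabric vs. array bandwidth for this thread's topology.
# Assumptions (from the discussion, not measured): 8 blades with 4Gb/s HBA
# ports, 8 uplinks per Brocade 4424 (x2 switches), 4 switch-to-SP links, and
# a CX300 with 2Gb/s-generation front-end ports.

GBPS_PER_HOST_PORT = 4   # blade HBA / 4424 port speed
GBPS_PER_SP_PORT = 2     # assumed CX300 front-end port speed

blades = 8
uplinks_per_4424 = 8
switches = 2
sp_links = 4

host_side = blades * GBPS_PER_HOST_PORT                          # initiator bandwidth
uplink_side = switches * uplinks_per_4424 * GBPS_PER_HOST_PORT   # ISL bandwidth to the 5000
array_side = sp_links * GBPS_PER_SP_PORT                         # bandwidth into the SPs

print(f"hosts: {host_side} Gb/s, uplinks: {uplink_side} Gb/s, array: {array_side} Gb/s")
print(f"host-to-array oversubscription: {host_side / array_side:.0f}:1")
```

Under those assumptions the uplinks carry far more potential bandwidth than the array can ever serve, which is why they are really there for redundancy and path diversity, not raw throughput.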

http://blog.laspina.ca/ vExpert 2009