VMware Cloud Community
mudha
Hot Shot

FC SAN

While configuring an FC SAN, when we connect two 4Gb FC cards in Active-Active mode, what bandwidth do we get on each adapter? Would the total bandwidth be 8Gb, or only 4Gb?

6 Replies
Texiwill
Leadership

Hello,

4Gb per channel. Without third-party multipath drivers (ESX 4 only) that support link aggregation, you will not get 8Gb, just 4Gb.

ESX v3 can use an experimental load-balancing policy, but that still only gets you 4Gb of throughput.
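You can see what you actually have on ESX 4 by listing the devices NMP has claimed and the path selection policy each one is using (a quick check from the console, assuming the vSphere 4 esxcli namespace):

esxcli nmp device list

The output should show the SATP and PSP per device; none of the built-in PSPs will bond two 4Gb links into a single 8Gb pipe for one LUN.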


Best regards,
Edward L. Haletky
VMware Communities User Moderator, VMware vExpert 2009, DABCC Analyst
====
Now Available on Rough-Cuts: 'VMware vSphere(TM) and Virtual Infrastructure Security: Securing ESX and the Virtual Environment'
Also available 'VMWare ESX Server in the Enterprise'
SearchVMware Pro|Blue Gears|Top Virtualization Security Links|Virtualization Security Round Table Podcast

SuperGrobi73
Enthusiast

Hello mudha,

When using 4Gb FC cards, storage arrays and fabrics, the bandwidth is first of all 4Gb per link, nothing more!

Starting with ESX 4.0, the hypervisor uses all ports of your HBAs actively! This is a big difference compared to 3.x and older. If your storage is capable of Active/Active, the question becomes whether your storage array can use all of its ports to drive traffic to one LUN; then, and only then, is there a possibility of aggregating the traffic up to "8Gb" of FC (but keep in mind, a single 4Gb link will stay 4Gb).

The benefit of the new NMP module is the ability to use all SPs actively. Using Round Robin (RR) or Fixed as the PSP can then bring you a big performance boost.
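For instance, to switch a single LUN to Round Robin from the ESX 4 console (just a sketch; the device ID is a placeholder for your own LUN, so double-check the exact options on your build):

esxcli nmp device setpolicy --device <naa.id-of-your-LUN> --psp VMW_PSP_RR

The same thing can be done per LUN in the vSphere Client under Manage Paths.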

As you can see, it is not only a question of teaming the HBAs and ports; it is much more a question of the fabric and the storage array you use. What is your design? Can you please describe your SAN?

C U Carsten

---

My blog: http://www.datenfront.de
mudha
Hot Shot

Does that mean that with ESX 4.0, an active/active configuration, and two 4Gb cards, I can get 8Gb of bandwidth?

Josh26
Virtuoso

Except you should note the "per channel" part.

If you've got two LUNs mapped, most cards will balance the LUNs across channels, even under 3.5i.
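You can verify this from the ESX 3.5 service console: listing the paths shows which HBA each LUN's active path is on, so you can see whether your LUNs ended up spread across both adapters.

esxcfg-mpath -l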

AndreTheGiant
Immortal

To aggregate the FC links for a single LUN you need third-party software (like PowerPath/VE).

Otherwise you can do a "mass aggregation" by using different paths for different LUNs.
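A rough sketch of that with the built-in NMP on ESX 4 (device IDs and path names are placeholders, and it is worth verifying the exact esxcli nmp fixed syntax on your build): give each LUN the Fixed PSP and point its preferred path at a different HBA, so each 4Gb link carries part of the total load.

esxcli nmp device setpolicy --device <naa.id-of-LUN1> --psp VMW_PSP_FIXED
esxcli nmp fixed setconfig --device <naa.id-of-LUN1> --path vmhba1:C0:T0:L1

Repeat for LUN2 with a path on the other HBA. Each individual LUN still tops out at 4Gb, but together the LUNs can use both links.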

Andre

**if you found this or any other answer useful please consider allocating points for helpful or correct answers

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
SuperGrobi73
Enthusiast

As others have mentioned, third-party software like EMC PowerPath can aggregate your HBA channels/ports.

That is only half the truth: for an 8Gb uplink, your fabric and storage array must support this too!

e.g.:

With HBA aggregation, a 4Gb fabric, and a 4Gb storage array with a single port, traffic for one LUN will remain 4Gb FC, but on the host (ESX) side you can send and receive 8Gb of FC traffic in total!

To get 8Gb out of one LUN, you will need either aggregation on the storage array side as well, or an 8Gb FC storage array and fabric!

Remember, we're talking about access to a single LUN!

The NMP module will bring you better throughput for many LUNs over one HBA with multiple ports, or many HBAs with one port, and so on.
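If you want Round Robin to be the default for every LUN your array presents, instead of setting it per device, you can change the default PSP for the array's SATP. This is from memory for the 4.0 esxcli, so verify it on your build, and the SATP name is a placeholder for whatever module claims your array:

esxcli nmp satp setdefaultpsp --satp <VMW_SATP_for_your_array> --psp VMW_PSP_RR

Devices claimed by that SATP then default to Round Robin (existing devices typically pick it up after a reclaim or reboot), which helps spread many LUNs across all HBA ports.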

---

My blog: http://www.datenfront.de