Peter_Grant
Enthusiast

iSCSI Multipathing vs. just making NICs Active/Active (vSphere)


Hi

I have two physical NICs in my dedicated iSCSI vSwitch, and I understand that if I want to use multipathing I need to create two VMkernel ports, allocate one to each NIC, and bind them to the iSCSI initiator using the CLI.
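For reference, the CLI binding step looks something like this on vSphere 4 (the vmk and vmhba names below are just examples from a typical setup; substitute your own):

```shell
# Bind each iSCSI VMkernel port to the software iSCSI adapter.
# vmk1/vmk2 and vmhba33 are example names -- check yours first with
# "esxcfg-vmknic -l" and "esxcfg-scsidevs -a".
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the bindings took effect
esxcli swiscsi nic list -d vmhba33
```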

The question is: what advantages does this new multipathing feature give over just teaming the NICs in an Active/Active configuration? Aren't both configurations equally redundant?

Thanks

Pete

-- Peter Grant, CTO, Xtravirt.com

Accepted Solutions
Paul_Lalonde
Commander

MPIO allows a server with multiple NICs to transmit and receive I/O across all available interfaces to a corresponding MPIO-enabled SAN. If a server had four 1 Gbps NICs and the SAN had four 1 Gbps NICs, the aggregate line rate would be 4 Gbps (500 MB/s), of which roughly 400 MB/s (3.2 Gbps) is achievable in practice once protocol overhead is accounted for.
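To put numbers on that, here's the back-of-the-envelope arithmetic (the ~80% usable figure is just a rough allowance for TCP/iSCSI framing overhead, not a measured value):

```shell
# MPIO aggregate throughput estimate for the 4-NIC example above
links=4            # NICs on each side
gbps_per_link=1    # 1 Gbps each
raw_gbps=$((links * gbps_per_link))     # 4 Gbps aggregate line rate
raw_mbps=$((raw_gbps * 1000 / 8))       # 500 MB/s before overhead
usable_mbps=$((raw_mbps * 8 / 10))      # ~80% usable -> ~400 MB/s
echo "${raw_gbps} Gbps raw, roughly ${usable_mbps} MB/s usable"
```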

Link aggregation via NIC teaming (or LACP, PAgP, 802.3ad, etc.) does not work the same way. Link aggregation does not improve the throughput of a single traffic flow (one source communicating with one destination). A single flow will always traverse the SAME path.

The benefit of link aggregation is seen when several unique flows (each with a different source/destination) exist. Each individual flow is sent down its own available NIC interface, chosen by a hash algorithm. The more unique flows, the more NICs are utilized, and therefore the more aggregate throughput is achieved. Link aggregation will not provide improved throughput for iSCSI, although it does provide a degree of redundancy.
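To illustrate the pinning behaviour, here's a toy sketch of a teaming hash (this is NOT the actual vSwitch algorithm; with IP-hash teaming ESX hashes the source and destination IPs, but any deterministic hash shows the same effect):

```shell
# Toy teaming hash: the uplink is chosen from a hash of the flow's
# source/destination, so a given pair ALWAYS maps to the same NIC.
pick_nic() {
    # $1 = source IP, $2 = destination IP, $3 = number of NICs in the team
    h=$(printf '%s' "$1$2" | cksum | cut -d' ' -f1)
    echo "vmnic$((h % $3))"
}

pick_nic 10.0.0.10 10.0.0.20 2   # one iSCSI flow: lands on one NIC
pick_nic 10.0.0.10 10.0.0.20 2   # repeated: identical answer, same NIC
pick_nic 10.0.0.11 10.0.0.20 2   # different source: may land on the other NIC
```

Because the host-to-array iSCSI session is a single flow, it stays pinned to one uplink no matter how many NICs are in the team.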

Hope this helps.

Paul

4 Replies
TobiasKracht
Expert

Very good question! It really depends on the type of storage you are using. If your iSCSI vendor offers HA, then you may need to use MPIO, in which case the decision is made for you.

StarWind Software R&D http://www.starwindsoftware.com
Peter_Grant
Enthusiast

The storage is Lefthand...

Can anyone explain this? I've posted this question twice... :)

-- Peter Grant, CTO, Xtravirt.com
VMmatty
Virtuoso

If you haven't already read it, I strongly recommend this blog article, written by some of the best folks from the major storage vendors. It gives a lot of detail on what you're asking and does a great job of explaining MPIO in vSphere 4.

http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vs...

Matt | http://www.thelowercasew.com | @mattliebowitz