VMware Cloud Community
champcf
Enthusiast

Multiple NICs for higher bandwidth to iSCSI?

I've been searching but haven't found a satisfying answer. My question is: is it possible to combine multiple NICs in order to achieve more bandwidth to a multi-port iSCSI SAN device?

The scenario is that we want a good connection from an ESXi server to an iSCSI array and we're thinking that 1Gbps isn't fast enough. We would have an HP DL360 with another NIC for a total of 4 (or possibly 6) 1Gbps ports. The iSCSI SAN device is going to be an HP StorageWorks 2012i (MSA2000i) with dual controllers and has 4 1Gbps ethernet ports. We will probably directly connect the SAN to the ESX server via crossover cables until we have a need for more ESX servers. However, getting a gigabit ethernet switch now is also an option (HP ProCurve 2810-24G).

Reply
0 Kudos
5 Replies
weinstein5
Immortal

It is possible - using the software iSCSI initiator, you would need to create a NIC team on the vSwitch and select the IP Hash load-balancing method. This gives a better probability of utilizing all the NICs in the team, because the outbound NIC is chosen based on the originating and destination IP addresses. Note that this requires a physical switch that supports link aggregation.
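To see why IP-hash teaming only helps when multiple IP pairs are in play, here is a small illustrative sketch. It uses the commonly described XOR-and-modulo form of the hash; the exact hash in any given ESX build may differ, and the addresses are just examples:

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """Pick an uplink index by XOR-ing the source and destination IPs
    and taking the result modulo the number of uplinks in the team.
    Illustrative only - real ESX builds may hash differently."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % n_uplinks

# One VMkernel IP talking to the four target ports of a 4-port array:
for dst in ("192.168.1.1", "192.168.1.2", "192.168.1.3", "192.168.1.4"):
    print(dst, "-> uplink", ip_hash_uplink("192.168.1.11", dst, 4))
```

The key property: a given source/destination pair always hashes to the same uplink, so a single iSCSI session never exceeds one NIC's bandwidth - you only spread load when the array presents multiple target IPs.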

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
champcf
Enthusiast

Thanks, weinstein5. I spoke with HP and they said that the MSA2012i would not be able to perform such a function even though it has multiple ports. Are there any SAN devices that you can recommend?

Also, can you comment on the fault tolerance of the setup that you described?

Thanks!

dlhartley
Contributor

I've been pondering the same question - however in all documentation that I read, attempting to team iSCSI NICs together will just result in unused bandwidth.

We're in a similar situation - one of our storage boxes is a network-attached iSCSI array, a Dell MD3000i. It has 4 gigabit ethernet connections, and it'd be great if we could somehow team these together to create a single 4 Gbps trunk to the box from the ESX servers. But from what I can find, "teaming" will not actually result in any increased bandwidth, because ESX will always use the same NIC to communicate with the storage.

On the ESX server you're able to set multipath load balancing, but this doesn't look anything like link aggregation, and I'm pretty sure it's more of a failover method.

Apparently ESX will use the same source mac address when communicating with the iSCSI storage (hence using the same card) even though you can team the cards together. I can't verify that fact myself, but have heard it from more than one source in the last couple of months.

I'm guessing that if HP said the box isn't able to, and I've heard similar things from Dell and VMware documentation, it's likely that iSCSI teaming is still a myth... It'd be fantastic to get it working though!

Yattong
Expert

Hey

We are trying to do exactly the same thing at the moment and came across the articles below. They should help you understand what needs to be configured.

http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-cust...

http://media.netapp.com/documents/tr-3428.pdf

Let you know how we get on later...



If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points

~y

HSpeirs
Enthusiast

It is possible to get greater than 1 Gbps of potential throughput to the MSA2012i - you will however want to use a switch rather than crossover cables, and you need to create at least two logical disks on the MSA.

When you create the logical disks, they are assigned default ownership to one of the controllers: Disk 1 goes to Controller 1, Disk 2 to Controller 2, Disk 3 to Controller 1, and so on. Then carve those logical disks into multiple LUNs.

On the MSA configure the iSCSI IP ports so that Port 1 on both controllers are on one subnet, and port 2 on another. eg:

Controller 1 - Port 1 - 192.168.1.1

Controller 1 - Port 2 - 192.168.2.1

Controller 2 - Port 1 - 192.168.1.2

Controller 2 - Port 2 - 192.168.2.2

Then on the ESX server create two vSwitches, each with a Service Console port and a VMkernel port. Put one of the vSwitches on the first subnet and the other on the second. eg:

vSwitch 1 VMkernel - 192.168.1.11

vSwitch 1 SC - 192.168.1.21

vSwitch 2 VMkernel - 192.168.2.11

vSwitch 2 SC - 192.168.2.21
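On classic ESX 3.x, that layout could be built from the service console roughly like this. This is a sketch only - vmnic2/vmnic3, the port group names, and the vswif number are assumptions; check your actual NICs with esxcfg-nics -l first:

```shell
# vSwitch 1 on the 192.168.1.x subnet (vmnic2 is a placeholder)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "iSCSI-1" vSwitch1
esxcfg-vmknic -a -i 192.168.1.11 -n 255.255.255.0 "iSCSI-1"
esxcfg-vswitch -A "SC-iSCSI-1" vSwitch1
esxcfg-vswif -a vswif1 -p "SC-iSCSI-1" -i 192.168.1.21 -n 255.255.255.0

# vSwitch 2 on the 192.168.2.x subnet (vmnic3 is a placeholder)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "iSCSI-2" vSwitch2
esxcfg-vmknic -a -i 192.168.2.11 -n 255.255.255.0 "iSCSI-2"
esxcfg-vswitch -A "SC-iSCSI-2" vSwitch2
esxcfg-vswif -a vswif2 -p "SC-iSCSI-2" -i 192.168.2.21 -n 255.255.255.0
```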

When you scan for LUNs, you'll now have two paths to each LUN - one via the 192.168.1.x subnet, and one via the 192.168.2.x subnet.

You can then set the preferred path for each LUN using fixed paths, alternating the target - so LUN 1 goes to Target 1, LUN 2 to Target 2, LUN 3 to Target 1, and so on.
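That alternation is just a round-robin of LUNs over the two targets. A quick illustrative sketch (the names here are hypothetical, not an ESX API):

```python
def assign_preferred_paths(lun_ids, targets):
    """Alternate each LUN's fixed preferred path across the available
    targets, so reads and writes are spread over both subnets and
    both controllers. Illustrative helper, not an ESX API."""
    return {lun: targets[i % len(targets)] for i, lun in enumerate(lun_ids)}

paths = assign_preferred_paths(
    ["LUN1", "LUN2", "LUN3", "LUN4"],
    ["Target1 (192.168.1.x)", "Target2 (192.168.2.x)"],
)
for lun, target in paths.items():
    print(lun, "->", target)
```

With this spread, no single target port carries all the traffic, which is what lets the aggregate exceed a single link's bandwidth.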

With two pNICs on each of the vSwitches, at least two logical disks, and at least two LUNs per logical disk, you can theoretically get up to the full 4 Gbps of throughput between the host and the SAN.

H.
