VMware Cloud Community
mazorax
Contributor

Load Balancing or Multipath

I am currently running ESX 3.5 on a Dell 2950 with four network cards: two dedicated to network traffic and two dedicated to SAN traffic. I have a Dell MD3000i SAN and would like to load balance or use multipathing. Each of the storage devices that point to the SAN has four paths. I have tried all sorts of settings to get it to work but haven't had any luck. The setup is two dual-port controllers in the SAN and two Dell 5324 switches; each dual-port controller is connected to each of the Dell 5324s, so basically four cables, two going to each switch. One card from the ESX host is plugged into one switch and the other is plugged into the second switch. Can anyone provide any insight into this?

Thank You!

15 Replies
Texiwill
Leadership

Hello,

The ESX software iSCSI initiator provides redundancy (failover), not multipathing/load balancing.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education, as well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
kukacz
Enthusiast

This is only true for the ESX software initiator. There is another way to connect an iSCSI LUN to a VM: through a software initiator running inside the virtual machine. When you attach multiple vNICs, routed through different physical ports, to the VM, you can aggregate throughput, i.e. load balance. I don't know the Dell array, but this is how it works for LeftHand or iStor arrays.
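A rough sketch of what the ESX side of that guest-initiator approach can look like (all vSwitch, port-group, and vmnic names here are hypothetical assumptions, not from the thread):

```shell
# Assumption: vmnic2 and vmnic3 are the two SAN-facing pNICs.
# Give each guest vNIC its own physical path by placing each
# port group on a vSwitch with a single, distinct uplink.
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -A "iSCSI-Guest-A" vSwitch2

esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -A "iSCSI-Guest-B" vSwitch3
```

With one vNIC attached to each port group, the initiator inside the guest can then open sessions over both physical paths.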

--

Lukas Kubin

bjselman
Contributor

...pulling thread out from under the rug...

"When you attach more vNICs routing to different physical ports to the VM, then you can aggregate throughput - load balance."

Does this require the use of Microsoft NLB on the vNICs? I wonder what the configuration overhead would be, since you have to set it for multicast only, right?

kukacz
Enthusiast

No, it doesn't employ Microsoft NLB. Instead it uses either:

  1. the Multiple Connections per Session (MCS) feature of the Microsoft iSCSI Initiator, or

  2. a vendor-specific MPIO driver for the Microsoft iSCSI Initiator.

Both must be supported by the iSCSI array vendor; they should be able to tell you. Examples of 1 are the iStor arrays; an example of 2 is LeftHand SAN/iQ.

--

Lukas Kubin

bjselman
Contributor

We have the SAN/iQ MPIO driver. In order for it to work properly, does the VM need at least two vNICs? I'll check my LeftHand documentation.

Thanks, Lukas.

kukacz
Enthusiast

One vNIC is OK if it is bound to a vSwitch configured to load-balance traffic across multiple physical NICs, with the appropriate physical-switch support configured too. LeftHand SAN/iQ will automatically create one connection between the vNIC's IP and each cluster node's virtual IP. Because multiple address pairs are involved in the traffic, ESX's IP-based load balancing will bring an increased throughput effect.
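A sketch of the single-vNIC variant described above (uplink and vSwitch names are made up; this assumes the corresponding physical switch ports are configured as a static link aggregate, which the IP-hash policy requires):

```shell
# Two SAN-facing uplinks teamed on the same vSwitch.
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
# The teaming policy itself ("Route based on ip hash") is set in the
# VI Client under the vSwitch's NIC Teaming properties. With IP hash,
# each source/destination IP pair can be placed on a different uplink,
# which is why connections to multiple LeftHand cluster-node IPs
# spread across the physical NICs.
```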


--

Lukas Kubin

kjb007
Immortal

Check the thread regarding driving traffic through multiple NICs using the software iSCSI initiator.

http://communities.vmware.com/message/913606

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
bjselman
Contributor

So MAC-based load balancing won't have an increased throughput effect? Remember, my switches only load-share on MAC SA/DA. So I'm not going to get inbound/outbound load balancing, nor multipathing with the LeftHand MPIO driver?

Am I SOL?

kjb007
Immortal

In order for your MPIO driver to work, you have to have multiple paths to the storage. If you're using a MAC-based algorithm, you will need either two MACs on the source or two MACs on the destination. So, for example, set up multiple LUNs and IPs on your iSCSI target and map one LUN per IP; that will give you multiple MAC SA/DA pairs, so you should get load balancing of a sort.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
bjselman
Contributor

"then you will need either 2 mac on the source or 2 mac on dest, so either setup multiple LUNs and IPs on your iSCSI target, and map one LUN per IP"

What is the "or" part of this?

kjb007
Immortal

Or ; )

Have multiple NICs as described and map the targets through one adapter, one LUN.

You can also:

Set up multiple vmkernel ports on your vSwitch, with multiple IPs, and map the same LUN through different paths on the initiator side instead of the target side.

All have the same effect: multiple source MACs or multiple destination MACs.
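The vmkernel-port variant in the last option could look something like this on ESX 3.5 (addresses and port-group names are made up for illustration):

```shell
# Two vmkernel ports with distinct IPs on the iSCSI vSwitch, giving
# two source IP/MAC pairs toward the same target.
esxcfg-vswitch -A "iSCSI-vmk0" vSwitch1
esxcfg-vmknic -a -i 10.10.10.11 -n 255.255.255.0 "iSCSI-vmk0"
esxcfg-vswitch -A "iSCSI-vmk1" vSwitch1
esxcfg-vmknic -a -i 10.10.10.12 -n 255.255.255.0 "iSCSI-vmk1"
# Enable the software iSCSI initiator. Note that ESX 3.x also needs a
# Service Console port on the iSCSI network for discovery/login.
esxcfg-swiscsi -e
```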

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
bjselman
Contributor

That is funny in one way, not funny in another.

The message board is pretty darn smart, ain't it...

dominic7
Virtuoso

I have an IT emergency; what is Scott Holmes' number?

bjselman
Contributor

kjb007,

I am not using an HBA. The SAN uses a virtual IP that acts as a designator for the best path to the data.

I just got off the phone with LeftHand. They said MPIO will ONLY work in that scenario (multiple NICs) IF it is a physical server; it doesn't work in a VMware environment (and they don't support it in a VMware environment).

It's like my brother shopping for a brand-new truck and being sold on the new and improved fuel injector that will give him 40 mpg. When he gets it home, he realizes he actually only gets 20 mpg. Tough sell.

I'm done. I can ping. That counts.

thanks again!

kukacz
Enthusiast

BJ, if your switches are 2900s like mine, then it should work fine with LeftHand, as it did for me. Whatever balancing algorithm HP really uses internally, it works with the IP-hash method set on the VMware side.

LeftHand's MPIO works by creating multiple connections from the single IP port of your VM to each cluster node.

--

Lukas Kubin
