rickardnobel
Champion

Why multiple vmkernels for iSCSI multipath?

I am a bit confused about iSCSI multipathing. What is the reason for creating two VMkernel interfaces and then using the command-line tools to bind them to the software iSCSI vmhba? How is this different from using one software iSCSI initiator on a vSwitch with multiple VMNICs, connected to different physical switches?
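To be concrete, the kind of command-line binding I am thinking of looks roughly like this (vSphere 4.x syntax as I understand it; vmhba33, vmk1 and vmk2 are only example names, not from any real setup):

# Bind two VMkernel ports to the software iSCSI adapter:
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
# Verify which vmk ports are bound:
esxcli swiscsi nic list -d vmhba33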

My VMware blog: www.rickardnobel.se
1 Solution

Accepted Solutions
denisbaturin
Enthusiast

There is the same discussion in a Russian VMUG thread.

iSCSI multipathing is not about paths, it's about pairs: initiators (vmk interfaces) and targets.

If you have two targets:

  • and you need failover only - 1 vmk is enough (NICs in active/standby mode)
  • if you need load balancing
    • and you can use Link Aggregation + IP hash - 1 vmk is enough (PSP is Round Robin and NICs in active/active mode)
    • if you can't use LA - 2 vmk are needed (a rough command sketch follows below)
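A rough sketch of the 2 vmk variant on classic ESX 4.x (the port group names, IPs and adapter name below are made up for illustration; each iSCSI port group must also be set to exactly one active vmnic, which is done in the vSphere Client):

# Two port groups on the same vSwitch, one per iSCSI uplink:
esxcfg-vswitch -A iSCSI-1 vSwitch1
esxcfg-vswitch -A iSCSI-2 vSwitch1
# One VMkernel interface in each port group:
esxcfg-vmknic -a -i 192.168.1.11 -n 255.255.255.0 iSCSI-1
esxcfg-vmknic -a -i 192.168.1.12 -n 255.255.255.0 iSCSI-2
# Bind both vmk ports to the software iSCSI adapter:
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33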
----- Think Twice Before Installing Something

13 Replies
FranckRookie
Leadership

Hi Ricnob,

You can find a very interesting article about iSCSI multipathing here.

Hope it helps.

Regards

Franck

depping
Leadership

Chad explains it very well in his article. I would recommend reading it. Let us know which type of iSCSI array you are running so we might be able to give some additional hints/tips.

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

rickardnobel
Champion

Duncan wrote:

Let us know which type of iSCSI array you are running so we might be able to give some additional hints/tips.

Thank you Duncan. I do not, however, have any iSCSI array; I am just interested in understanding the technology. From the link FranckRookie posted, I think I am beginning to understand parts of it.

Is it correct that if I use one VMkernel port on a vSwitch with multiple VMNICs going to different physical switches, with the load balancing policy set to Port ID, and the storage side has a similar setup, then this is not really "multipath", even though it in some way allows multiple physical paths between the host and the iSCSI SAN?

My VMware blog: www.rickardnobel.se
Josh26
Virtuoso

Hi,

Creating a team of two NICs with one VMkernel port means that one VMkernel -> SAN connection is your "path". While that network may have two physical paths it can use, the iSCSI layer doesn't know anything about that, and hence you don't actually have a multipath environment that the iSCSI layer can tune appropriately.

A network team doesn't do any of the "active/active" sort of operation that is possible with multipathed NICs.
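You can also see this on the host itself: with a single VMkernel port, the storage stack reports just one path per iSCSI device, no matter how many NICs are in the team. For example (standard ESX 4.x command; device names will of course differ per setup):

# Brief listing of devices and the paths to each:
esxcfg-mpath -b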

depping
Leadership

I hope I understand your question correctly.

If you had a single VMkernel port but multiple NICs with Port ID load balancing, you would still only have one path. The reason is that the VMkernel port is assigned a single vmnic when this load balancing mechanism (Port ID) is used. So it will never use the other path unless the originally assigned NIC fails.

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

rickardnobel
Champion

Thanks for your replies. So "multipath" is essentially not just the ability to fail over to a different path in case of failure, but to use multiple paths at the same time?

And even if I set up IP-hash-based load balancing (and have switches with some multi-switch EtherChannel support) and the iSCSI SAN has multiple IP addresses, so that the iSCSI traffic goes over both VMNICs (perhaps by carefully choosing the IP addresses...) - will this still not be "real" multipath?

My VMware blog: www.rickardnobel.se
phspok
Contributor

Guys

I started a related thread: http://communities.vmware.com/message/1674345#1674345

Bottom line is:

Is it supported to use normal IP source/destination load balancing for the vSwitch that the software iSCSI adapter is going to be using? If the software iSCSI adapter is connecting to different IPs on different arrays (or storage processors), then it would presumably use a different vmnic for each target connection.

Or do you HAVE to do the claimrule binding to do software iSCSI multipathing?

I am trying to figure out why anyone would want to voluntarily limit themselves to just storage Round Robin multipathing when they could be using IP hash.

Thanks

Matt.

Josh26
Virtuoso

Hi,

No, it is not supported.

With plain NIC teaming, iSCSI has no knowledge of whether upstream connections are up or down. With multipathing it does.

depping
Leadership

Round Robin --> changes paths after X amount of I/Os.

IP hash --> selects a path based on the hash outcome of the source and destination IP, meaning that it will stick with this path unless a NIC fails.

It is not the same!
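You can check which path selection policy a device is actually using with (4.x syntax):

esxcli nmp device list

which shows, among other things, the Path Selection Policy (for example VMW_PSP_RR for Round Robin) and the working paths for each device.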

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

phspok
Contributor

Duncan

Many thanks for the reply; what you say is exactly what I believed. I needed some clarification that that is actually what would happen, and that there were not any "unsupported" issues around just using NIC teaming.

"If" the ESX host is accessing different destination arrays with a similar load on each, then IP hash seems reasonable; if, however, most access is to one particular array, then storage Round Robin would be better.

Do you know what the I/O size is before it switches paths? Not critical, but my students are bound to ask.

Many Thanks again

Matt.

phspok
Contributor

Sorted, I "think":

[root@sc-server02 ~]# esxcli nmp roundrobin getconfig -d t10.9454450000000000000000001000000071610000F0000000
Byte Limit: 10485760
Device: t10.9454450000000000000000001000000071610000F0000000
I/O Operation Limit: 1000
Limit Type: Default
Use Active Unoptimized Paths: false
[root@sc-server02 ~]#

depping
Leadership

1000 is the default indeed. You could change it based on the best practices of the array vendor, as some recommend other values.
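For example, to set a device to switch paths after every single I/O instead of every 1000 (the value 1 below is only an illustration that some vendors have recommended; always check your array vendor's best practices first):

esxcli nmp roundrobin setconfig -d t10.9454450000000000000000001000000071610000F0000000 --iops 1 --type iops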

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive
