VMware Cloud Community
dwujcik
Contributor

3.5.X Any hopes for real multipathing instead of just failover/RR?

The rumor mill at work has been trying to convince me that 3.5 has actual multipathing, allowing for more than one active path at a time for increased speeds over fiber HBAs...

Can anyone verify that this is true/false? I can't seem to find any verification in the release notes/etc...

Thanks,

-- Dave

10 Replies
RParker
Immortal

TRUE!

And that's my final answer :)

dwujcik
Contributor

So... I can put 4 HBAs in an ESX box and actually push data across all 4 HBAs at once?

Can you explain your final answer in more detail?

Thanks,

-- Dave

Jwoods
Expert

Multipathing via round-robin balancing. First of all, it's "experimental," meaning VMware won't support it in a production environment.

You can enable it on a 3.5 host within the VIC: select the host, go to the Configuration tab > Storage Adapters, highlight an HBA, right-click a path, and select Manage Paths. Under "Policy," click the Change button and select the Round Robin option.

Check out the doc here -->

Excerpt:

To achieve better load balancing across paths, administrators can specify that the ESX Server host
should switch paths under certain circumstances. Different settable options determine when the ESX Server
host switches paths and what paths are chosen.

When to switch – Specify that the ESX Server host should attempt a path switch after a specified number
of I/O blocks have been issued on a path or after a specified number of read or write commands have been
issued on a path. If another path exists that meets the specified path policy for the target, the active path
to the target is switched to the new path. The --custom-max-commands and --custom-max-blocks
options specify when to switch.

Which target to use – Specify that the next path should be on the preferred target, the most recently used
target, or any target. The --custom-target-policy option specifies which target to use.

Which HBA to use – Specify that the next path should be on the preferred HBA, the most recently used
HBA, the HBA with the minimum outstanding I/O requests, or any HBA. The --custom-HBA-policy
option specifies which HBA to use.
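From the service console, the same experimental policy can be set with esxcfg-mpath. This is only a sketch: the flag spellings come from the excerpt above, the command form and the vmhba path ID are assumptions on my part, so verify the exact syntax against your 3.5 build before relying on it.

```shell
# Hedged sketch -- vmhba1:0:1 is a placeholder LUN/path ID; substitute
# a real one from the listing below.

# List all paths and their current policies
esxcfg-mpath -l

# Switch the LUN to the experimental custom policy, rotating to any HBA
# after every 50 commands (flag names taken from the doc excerpt above)
esxcfg-mpath --lun=vmhba1:0:1 --policy=custom \
    --custom-hba-policy=any \
    --custom-max-commands=50
```

Remember this is unsupported in production, per the note above.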

jhanekom
Virtuoso

You could utilise all available HBAs even on ESX 2.1 and 2.5 if you had an active/active SAN, and you can still do the same with ESX 3.0 and 3.5. (It's also possible with active/passive SANs, to a lesser extent, as it's not possible to effectively control from the ESX side which path a LUN will use.)

Active/active SANs allow you to use a "fixed" multipathing policy on ESX, letting you spread different LUNs over different paths. It's not perfect, since the load balancing does not aggregate bandwidth (i.e. a LUN peaking in activity won't be able to use more than one HBA at a time), but it is quite useful nonetheless and surprisingly effective once the total number of LUNs in your environment starts increasing.
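As a rough illustration of that manual spreading: with a fixed policy you pin each LUN's preferred path to a different HBA by hand. The command form and the path IDs below are assumptions (check `esxcfg-mpath -l` output and your build's syntax), not a verified recipe.

```shell
# Hedged sketch: on an active/active array, pin different LUNs to
# preferred paths on different HBAs so the aggregate load is spread,
# even though any single LUN still drives only one path at a time.
# All path IDs are placeholders.

esxcfg-mpath --lun=vmhba1:0:1 --policy=fixed --path=vmhba1:0:1 --preferred
esxcfg-mpath --lun=vmhba1:0:2 --policy=fixed --path=vmhba2:0:2 --preferred
```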

What the new experimental features in ESX 3.5 add is the ability to effectively aggregate multiple paths. In my view, with 4Gb FC this may be less useful than you might think, since it's tough enough for a disk subsystem to saturate that much bandwidth today on a pure streaming basis, let alone with normal workloads. I've not yet seen benchmarks that reliably inform that view, though.

Still, it's a big step forward for storage functionality in ESX.

dwujcik
Contributor

From the above two responses, I gather that my answer is "no."

I'm not looking for RR or any type of failover. I want to put 4 HBAs in my ESX server and stream 4x more data, over all 4 paths, from a single VM, to a single LUN, all at the same time.

My SANs and switches are more than capable of supporting this, but I guess ESX still isn't there yet :(

Does that explanation make more sense?

Thanks,

-- Dave

Jwoods
Expert

You're looking for something similar to MPIO (as most of us are), which is not yet available for ESX. The RR features are a step forward, but still not the same. Maybe VI4? ;)

dwujcik
Contributor

Hmm, what about this new feature that allows individual VMs to have a WWN that the SAN/switches can see?

Would the VM also be able to see the SAN's WWN in the same manner?

If so, would VMware show all available paths to the VM... say, allowing me to run PowerPath on the VM itself and push more bandwidth that way?

-- Dave

Edit: Aww, someone already marked my question as "Answered" and gave out my points... I totally am not done yet...grrr...

Jwoods
Expert

NPIV will allow you to make all of your SAN paths available to the VM. Still, in this instance, only one path is active at any time.

Another guide for your reading pleasure (NPIV page 90)
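For reference, NPIV is enabled per-VM through WWN assignments in the .vmx file (normally generated via the VI Client's Fibre Channel NPIV options rather than edited by hand). The values below are made-up placeholders, and the option names are my recollection of the 3.5 format, so verify them against the guide above.

```
# Hypothetical .vmx fragment -- placeholder WWNs, option names unverified
wwn.node = "28fa000c29000001"
wwn.port = "28fa000c29000002"
```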

mreferre
Champion

> I'm not looking for RR or any type of failover. I want to put 4 HBAs in my ESX server and stream 4x more data, over all 4 paths, from a single VM, to a single LUN, all at the same time

If I may, I'd add a word of caution on this. Keep in mind that VMware has never been meant (nor designed), at least so far, to run a single VM or workload EXTREMELY fast. This is true from multiple points of view and subsystems (storage included). What ESX does very well is aggregating lots of low-to-medium-demand workloads and sharing resources.

Think of it like a bus... rather than an F1 car where Mr. Schumacher (your VM) can, on his own, take advantage of the 4 x high-performing Bridgestone tyres (your HBAs).

This might change over time as more and more enterprise workloads get virtualized... but right now, what you want to achieve is a bit challenging.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
chrwei
Enthusiast

> I'm not looking for RR or any type of failover. I want to put 4 HBAs in my ESX server and stream 4x more data, over all 4 paths, from a single VM, to a single LUN, all at the same time.

The other thing you're missing is that you're talking about taking a stream, splitting it into 4 streams, sending them through 4 "tubes," and then stitching them back together. Your switches don't care, and your SAN may have a DSP or a fast enough CPU to do this, but your HBAs probably don't, and I'm not so sure I'd like the ESX console doing all that work. It's also likely to cause some delays while dropped packets get resent, and it could even perform slower than a single pipe on occasion.

If you want a faster connection, get a faster connection.

Reply
0 Kudos