elihuj
Enthusiast

LAG Benefits over IP Hash


Our hosts have 2x10GbE uplinks port-channeled on our Cisco switches, and use Route based on IP hash for LB. What (if any) is the benefit to configuring LACP on the vDS for the uplinks over this method?
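For context, an IP-hash policy picks the uplink deterministically from each source/destination IP pair, so any single flow is pinned to one physical link. Here is a simplified sketch of that idea; the exact hash ESXi uses may differ (the XOR-and-modulo formula below is an illustrative assumption, not VMware's implementation):

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick an uplink index from a source/destination IP pair.

    Toy model of IP-hash teaming (assumed formula, not ESXi's actual
    code). The key property is that the choice is deterministic per IP
    pair, so one flow can never use more than one link's bandwidth.
    """
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

# The same IP pair always maps to the same uplink...
a = ip_hash_uplink("10.0.0.5", "10.0.1.9", 2)
b = ip_hash_uplink("10.0.0.5", "10.0.1.9", 2)
# ...while a different pair may land on a different link.
c = ip_hash_uplink("10.0.0.5", "10.0.1.10", 2)
```

Note the consequence: with two 10GbE uplinks, one VM-to-client flow still tops out at 10Gb, because the hash never spreads a single flow across links.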

6 Replies
daphnissov
Immortal

Firstly, your thread title suggests you're using Load-Based Teaming (LBT), but your post says otherwise. Route based on IP hash is a mechanism for use in a LAG, whereas LBT does not rely on LAGs. Which are you using?

elihuj
Enthusiast

My mistake. I have updated the title to reflect that.

Our uplinks are in a port channel, and we're using IP hash as our load-balancing method. I'm trying to determine whether there's any benefit in going through the complexity of setting up LACP and LAG groups on top of what we have configured now.

daphnissov
Immortal

I'm in the process of writing an article about this but don't have it ready yet. My general recommendation (and that of the experienced community) is to use load-based teaming if you're entitled to a vDS, and not to use any form of LAG, be it static or LACP/dynamic. There are more pros than cons to this approach. LBT is simpler to set up, administer, and troubleshoot because it requires no special switch-side configuration. It's also the only method that can truly balance load across pNICs, something neither bonding method can achieve. So when using vSphere with a vDS entitlement, the strong recommendation is to use LBT and no LAG.
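To make the contrast with IP hash concrete, here's a rough sketch of the idea behind LBT. The real ESXi implementation moves virtual-port-to-pNIC mappings when a pNIC stays saturated (around 75% utilization over a 30-second window); the threshold, names, and structure below are illustrative assumptions, not VMware's code:

```python
SATURATION_THRESHOLD = 0.75  # assumed; modeled on ESXi's documented ~75% trigger

def rebalance(port_to_uplink, port_load, uplink_capacity):
    """Move virtual ports off saturated uplinks onto the least-loaded one.

    Toy model of load-based teaming: unlike IP hash, placement reacts
    to observed load instead of being fixed by a hash of the headers.
    """
    # Compute current load per uplink from the port assignments.
    load = {u: 0.0 for u in uplink_capacity}
    for port, uplink in port_to_uplink.items():
        load[uplink] += port_load[port]

    for port, uplink in list(port_to_uplink.items()):
        if load[uplink] / uplink_capacity[uplink] > SATURATION_THRESHOLD:
            # Move this port to the uplink with the lowest utilization.
            target = min(load, key=lambda u: load[u] / uplink_capacity[u])
            if target != uplink:
                load[uplink] -= port_load[port]
                load[target] += port_load[port]
                port_to_uplink[port] = target
    return port_to_uplink

# Two ports pinned to vmnic0 push it to 90%; LBT-style rebalancing
# moves the heavy port to the idle vmnic1.
new_map = rebalance({"A": "vmnic0", "B": "vmnic0"},
                    {"A": 8.0, "B": 1.0},
                    {"vmnic0": 10.0, "vmnic1": 10.0})
```

The design point is that no switch-side port channel is needed: each port still rides exactly one pNIC at a time, so standard MAC learning works, but the mapping adapts to real traffic.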

elihuj
Enthusiast

That's the same conclusion I've come to in my own research. Would the push for bonding (static/LACP) from Networking be more from a legacy standpoint? A stick-with-what-works kind of thing?

I would definitely be interested in that article once you are finished.

daphnissov
Immortal

Generally, in my experience and that of other experts I've consulted with over the years, the demand for LAG/LACP usually comes down to the following (ranked from most to least prevalent):

  1. The customer doesn't know how vSphere works and what networking features it offers.
  2. A mistaken belief that LAG/LACP is the only way to get concurrent link utilization and failover (related to #1).
  3. The networking team assumes vSphere/ESXi is just like every other physical server, or doesn't want to learn new features.
  4. Company standardization on LAG/LACP based on pre-virtualization technology.
  5. An existing vendor infrastructure/solution requires it (rare and archaic today).

If the mandate for LAG/LACP is coming from the networking team, one of #1-3 above is the likely reason.

elihuj
Enthusiast

Good stuff, thank you for the clarification, daphnissov.
