VMware Cloud Community
Marcel1967
Enthusiast

vSphere 6.5: LACP or LBT for 4 x 10 GbE uplinks

I have read quite a few blog posts about the configuration of ESXi uplinks. Many experts prefer LBT over LACP.

I am looking for some good arguments to use LBT over LACP in the following configuration:

- about 60 ESXi 6.5 hosts, all with 4 x 10 GbE network adapters
- NetApp NFS storage
- distributed switches
- single tenancy, all management done by the same group of IT admins
- no need for a single VM to consume more than 10 Gb of bandwidth
- network uplink redundancy required
- network traffic: VM LAN, vMotion, HA, ESXi management, NFS datastores, iSCSI from guest OS
- keep it simple

I am wondering how well LBT is able to detect a link failure or cable error.

What about using Host Profiles with LACP? Does that work well?

What would be your preferred configuration?

One dvSwitch for all 4 NICs, or 2 dvSwitches with 2 NICs each? LACP or LBT?

Thanks for your help!

3 Replies
erikverbruggen
Hot Shot
(Accepted solution)

LACP is not supported with Host Profiles; see the VMware vSphere 6.5 Documentation Library.

Link failure detection works the same for LBT and LACP. The difference is that with LACP the physical uplinks appear as a single logical uplink (a LAG), so bandwidth is aggregated across flows, although a single flow is still limited to one physical link.

My preferred teaming policy is LBT because of the reduced complexity and the ability to distribute traffic according to load.
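If you want to script this instead of clicking through the Web Client, here is a rough pyVmomi sketch that sets the LBT policy ("Route based on physical NIC load") on a distributed port group. The vCenter hostname, credentials and port group name are placeholders for your environment, not anything from this thread.

#!/usr/bin/env python
# Sketch: set the teaming policy of a distributed port group to
# "Route based on physical NIC load" (LBT). Hostname, credentials and
# the port group name are placeholders -- adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcsa.example.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
content = si.RetrieveContent()

# Locate the distributed port group by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == "dvPG-VM-Network")

# Build a reconfiguration spec that switches the teaming policy to LBT.
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.inherited = False
teaming.policy = vim.StringPolicy(inherited=False, value="loadbalance_loadbased")

port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.uplinkTeamingPolicy = teaming

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.configVersion = pg.config.configVersion
spec.defaultPortConfig = port_cfg

pg.ReconfigureDVPortgroup_Task(spec)
Disconnect(si)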

You could use 2 dvSwitches to separate management and workload (VM) traffic, but this can also be accomplished with 1 dvSwitch by using an explicit failover order on the port groups.
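If you stay with 1 dvSwitch, the explicit failover order is also just a teaming setting on each port group. Continuing the sketch above ("pg" is the port group located earlier, and the uplink names are placeholders):

# Pin a port group to specific uplinks with an explicit failover order.
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.inherited = False
teaming.policy = vim.StringPolicy(inherited=False, value="failover_explicit")
teaming.uplinkPortOrder = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
    inherited=False,
    activeUplinkPort=["Uplink 1", "Uplink 2"],
    standbyUplinkPort=["Uplink 3", "Uplink 4"])

port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.uplinkTeamingPolicy = teaming

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,
    defaultPortConfig=port_cfg)
pg.ReconfigureDVPortgroup_Task(spec)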

Do you have any requirements for separation of network traffic or QoS/traffic shaping? Those requirements would help when choosing between 1 or 2 dvSwitches.

Marcel1967
Enthusiast

There are no requirements for separation of network traffic. NIOC will take care of QoS.

What would be a good reason to put, for example, only NFS traffic on a second dvSwitch with 2 x 10 GbE uplinks, while VM LAN, vMotion, etc. stay on dvSwitch1?

erikverbruggen
Hot Shot

You could separate system and workload traffic with 2 dvSwitches to make it easier to identify which traffic goes through which dvSwitch, and to apply NIOC settings without affecting the other traffic type. But this means you will need to manage 2 dvSwitch instances, each with its own NIOC settings.
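If you do end up with 2 dvSwitches and want to verify that NIOC is enabled on both, a small pyVmomi check (switch name is a placeholder, "content" as in the earlier sketch) could look like the snippet below; the per-traffic-type shares themselves are usually easiest to review in the Web Client.

# Look up the dvSwitch by name and enable NIOC if it is not already on.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch-Storage")

if not dvs.config.networkResourceManagementEnabled:
    dvs.EnableNetworkResourceManagement(enable=True)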
