I have read quite a few blogposts about configuration of ESXi uplinks. Many experts prefer LBT over LACP.
I am looking for good arguments for LBT over LACP in the following configuration:
- about 60 ESXi 6.5 hosts, each with 4 x 10 GbE network adapters
- NetApp NFS storage
- distributed switches
- single tenancy, all management done by the same group of IT admins
- no need for a single VM to consume more than 10 Gb of bandwidth
- redundant network uplinks required
- network traffic: VM LAN, vMotion, HA, ESXi management, NFS datastores, iSCSI from guest OS
- keep it simple
I am wondering how reliably LBT detects a link failure or a cable error.
And what about combining host profiles with LACP, does that work well?
What would be your preferred configuration?
One dvSwitch for all 4 NICs, or 2 dvSwitches with 2 NICs each? LACP or LBT?
Thanks for your help!
LACP is not supported with Host Profiles; see the VMware vSphere 6.5 Documentation Library.
Link failure detection works the same for LBT and LACP. The difference is that for LACP the physical uplinks appear as a single logical uplink, providing more bandwidth.
My preferred teaming policy is LBT because of its reduced complexity and its ability to distribute traffic according to load.
You could use 2 dvSwitches to separate management and workload (VM) traffic, but this can also be accomplished with 1 dvSwitch and an explicit failover order.
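To make the single-dvSwitch option concrete, the explicit failover order can be sketched as a mapping from port groups to active and standby uplinks. The port group and uplink names below are illustrative assumptions, not from this thread; the point is only that every traffic type keeps a redundant path:

```python
# Hypothetical single-dvSwitch layout using explicit failover order.
# Port group and uplink names are examples, not an actual configuration.
failover_order = {
    # port group:  (active uplinks,          standby uplinks)
    "Management":  (["Uplink1"],             ["Uplink2"]),
    "vMotion":     (["Uplink2"],             ["Uplink1"]),
    "NFS":         (["Uplink3", "Uplink4"],  []),
    "VM-LAN":      (["Uplink3", "Uplink4"],  []),
}

# Redundancy check: each port group must be able to fail over,
# i.e. have at least two uplinks across active + standby.
for pg, (active, standby) in failover_order.items():
    assert len(active) + len(standby) >= 2, f"{pg} has no redundant uplink"
```

With a layout like this you get the same traffic separation as with 2 dvSwitches, while managing only one switch object.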
Do you have any requirements for separation of network traffic or for QoS/traffic shaping? Those requirements could help when choosing between 1 or 2 dvSwitches.
There are no requirements for separation of network traffic. NIOC will do the trick on QoS.
What would be a good reason to have, for example, only NFS traffic on a second dvSwitch with 2 x 10 GbE, while VM LAN, vMotion, etc. run on dvSwitch1?
You could separate system and workload traffic with 2 dvSwitches to make it easier to identify which traffic goes through which dvSwitch, and you can apply NIOC settings to one traffic type without affecting the other. The trade-off is that you then manage 2 dvSwitch instances, each with its own NIOC settings.
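As a rough illustration of how NIOC shares behave under contention, the arithmetic is simply each traffic type's shares divided by the total shares, times the link speed. The share values below are illustrative, not your actual settings:

```python
# Sketch of how NIOC shares divide a 10 GbE uplink when all traffic
# types are contending. Share values here are examples only.
link_gbit = 10
shares = {"Management": 50, "vMotion": 50, "NFS": 100, "VM": 100}

total = sum(shares.values())  # 300 shares in total
bandwidth = {traffic: link_gbit * s / total for traffic, s in shares.items()}

# Under full contention NFS and VM each get 100/300 of the link
# (about 3.33 Gbit/s); unused shares are redistributed when a
# traffic type is idle, so this is a floor, not a cap.
print(bandwidth)
```

This is why NIOC alone can cover the QoS requirement on a single dvSwitch: the shares only bite when the uplink is saturated.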