So I am a network guy, trying to figure out the "best practice" design from a physical-network-to-ESXi-host perspective. The physical topology is an ESXi server uplinking, via 2 NICs, to 2 upstream switches running an MLAG/vPC type technology. What is the recommended design here: bind the 2 links from the server to the physical switches in an LACP bundle, or leave them as individual trunks coming from the separate switches and let VMware figure out the hashing (whether MAC- or IP-based)? I have found little definitive info out there on this topic and would appreciate some help, with justification for any suggestions if possible.
LACP is supported only on the vSphere Distributed Switch (VDS) in vSphere 5.1. Otherwise, you'll need to use a static EtherChannel (mode on) with the teaming policy set to IP Hash.
I typically don't bother with a port channel to vSphere hosts unless there is a specific workload that would benefit. Normally I leave the ports as trunks and set the vSphere teaming policy to "Route Based on Physical NIC Load" (if using a VDS) or "Route Based on Originating Virtual Port ID" (if not).
Even if we use LACP or IP Hash, it is not guaranteed that ESXi will use both NICs, because you need different hash results across the source/destination IP pairs. That is why VMware developed LBT (Load-Based Teaming).
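To illustrate the point above, here is a simplified sketch of IP-hash uplink selection (an assumption for illustration, not VMware's actual code): the chosen uplink depends only on the source/destination IP pair, so one src/dst conversation always lands on the same pNIC, and two pairs can easily hash to the same uplink.

```python
def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick an uplink by XOR-ing the last octets of the two IPs (simplified)."""
    src_octet = int(src_ip.split(".")[-1])
    dst_octet = int(dst_ip.split(".")[-1])
    return (src_octet ^ dst_octet) % num_uplinks

# One VM talking to one client: every packet hashes to the same uplink,
# so only one pNIC carries that conversation's traffic.
print(ip_hash_uplink("10.0.0.5", "10.0.0.9", 2))   # always the same uplink
# A different client IP may hash to the other uplink, which is why IP hash
# only helps when there are many distinct src/dst pairs.
print(ip_hash_uplink("10.0.0.5", "10.0.0.10", 2))
```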
LBT is available with the Enterprise Plus license. With EtherChannel or any other aggregation type, ESXi doesn't know whether the pNICs are congested or not. LBT, by contrast, only moves network traffic when the send or receive utilization on an uplink exceeds 75% of capacity over a 30-second period. In other words, the load-based teaming (LBT) policy is traffic-load-aware and ensures the physical NIC capacity in a NIC team is optimized.
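The 75%-over-30-seconds rule can be sketched as follows (a minimal illustration of the idea, assuming per-second utilization samples; not VMware's implementation):

```python
THRESHOLD = 0.75       # 75% of link capacity
WINDOW_SECONDS = 30    # evaluation window

def should_move(samples_pct: list) -> bool:
    """samples_pct: per-second utilization samples (0.0-1.0).
    Return True when the mean over the last 30 seconds exceeds 75%."""
    window = samples_pct[-WINDOW_SECONDS:]
    mean = sum(window) / len(window)
    return mean > THRESHOLD

print(should_move([0.80] * 30))                  # sustained 80% -> rebalance
print(should_move([0.95] * 5 + [0.10] * 25))     # short burst -> stay put
```

Because the decision is based on a sustained window rather than instantaneous spikes, ports are not bounced around on every burst.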
So the best and truest load-balancing option is LBT.
Refer to the links below for more info.
Thanks everyone for your great answers! You have introduced me to a new (to me) feature!
LBT has spawned a few questions/concerns in my mind, though...
Gkeerthy, thanks for the links, they were great! Per them, is it conceivable that a VM's flow could be moved every 30 seconds with LBT? If so, does that not alarm you? Also, it looks like it is recommended to enable PortFast/PortFast trunk on the physical links. Given that the link never actually goes down, why is this being recommended? Is this meant to prevent the wait caused by STP convergence on VLANs (and the associated vDS port groups) when traffic is moved from one vDS uplink to another? If the physical switch port was already trunking the VLANs associated with the vDS port group on both vDS uplink ports, then PortFast seems like it wouldn't be necessary...
Also, if LBT is the preferred configuration approach, then why would VMware implement and boast about now having LACP in 5.1? Per what you have told me, LBT seems to negate all the advantages that LACP has to offer...?
Thanks again for all your help!
Anyone have any input on my concerns with LBT as stated in the previous post? Also, as stated above, I would be very interested in an explanation of why VMware has implemented LACP in 5.1 if, per LBT, it provides no distinct advantage...
The reason for using LACP or static link aggregation (called "IP Hash" in vSphere) is for use cases where a VM needs more bandwidth than a single physical NIC port can provide. With both the default Port ID NIC teaming and with LBT (available on the Distributed vSwitch), a single VM can never use more bandwidth than one vmnic.
With LACP/IP Hash it is possible for a single VM to use the sum of all vmnics (the physical ports on the ESXi host's network interfaces), provided there is a good spread of client IPs communicating with the VM.
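A quick sketch of why the spread of client IPs matters (reusing the same simplified last-octet-XOR hash as an assumption; the VM and client addresses here are made up): with many distinct client IPs, the per-conversation hashes land on different uplinks, so one VM's aggregate traffic can fill more than one vmnic, whereas Port ID or LBT would pin all of that VM's traffic to a single vmnic at any moment.

```python
from collections import Counter

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Simplified IP-hash: XOR the last octets, mod the uplink count."""
    return (int(src_ip.split(".")[-1]) ^ int(dst_ip.split(".")[-1])) % num_uplinks

vm_ip = "10.0.0.5"
clients = [f"10.0.1.{i}" for i in range(1, 101)]   # 100 distinct client IPs

# Count how many client conversations land on each of 2 uplinks.
usage = Counter(ip_hash_uplink(vm_ip, c, 2) for c in clients)
print(usage)   # conversations split across both uplinks
```

With a single client (or clients that all hash the same way), every conversation would land on one uplink and the aggregation buys nothing.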
But doesn't LBT only use one link until it reaches 75% or more, and only then start using the other link on the same vSwitch, whereas LACP will use both simultaneously?
Which would be better with LBT, and which with LACP:
4-8 1 Gb links?
2 10 Gb links?