9 Replies Latest reply on Jul 23, 2015 11:19 AM by tdubb123

    LACP or no LACP, that is the question

    eyeofthebeholder Lurker

  So I am a network guy, trying to find out what, from a physical-network-to-ESXi-host perspective, the "best practice" design is... The physical topology is an ESXi server uplinking, via 2 NICs, to 2 upstream switches running an MLAG/vPC-type technology. What is the recommended design here: to bind the 2 links from the server to the physical switches into an LACP bundle? Or to leave them as individual trunks coming from the separate switches and let VMware figure out the hashing (whether MAC- or IP-based hashing)? I have found little definitive info out there on this topic and would appreciate some help, with justification for suggestions if possible.

       

      Thanks much!

        • 1. Re: LACP or no LACP, that is the question
          Chris Wahl Master

          LACP is supported only on the VDS (vSphere Distributed Switch) in 5.1. Otherwise, you'll need to use a static EtherChannel (mode on) with the teaming policy set to IP Hash.

           

          I typically don't bother with a port channel to vSphere hosts unless there is a specific workload that would benefit. Normally I leave the ports as trunks and set the vSphere teaming policy to "Route based on physical NIC load" (assuming VDS) or "Route based on originating virtual port ID" (assuming no VDS).
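
           

          To illustrate the difference: here's a minimal, simplified Python model (not VMware's implementation) of how "Route based on originating virtual port ID" pins each virtual port to one uplink, so a single VM never spans two NICs.

          ```python
          # Simplified sketch of "Route based on originating virtual port ID":
          # each virtual port is pinned to one uplink by taking its port number
          # modulo the number of active uplinks. A single vNIC therefore always
          # uses one physical NIC; balancing happens only across VMs.

          def uplink_for_port(virtual_port_id: int, active_uplinks: int) -> int:
              """Return the index of the uplink this virtual port is pinned to."""
              return virtual_port_id % active_uplinks

          # Four VMs on a host with two uplinks: traffic spreads per-port, not per-flow.
          placements = {port: uplink_for_port(port, 2) for port in (10, 11, 12, 13)}
          ```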

          VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
          • 2. Re: LACP or no LACP, that is the question
            Gopinath Keerthyrajan Expert

            Even if we use LACP and IP hash, it is not guaranteed that ESXi will use both NICs, because you need different hashes across the source and destination IPs to spread flows over the links. That is why VMware developed LBT.

             

            It is available with the Enterprise Plus license. With an EtherChannel, or any other aggregation type, ESXi does not know whether the pNICs are congested or not. LBT, by contrast, only moves network traffic when the send or receive utilization on an uplink exceeds 75% of capacity over a 30-second period. In other words, the load-based teaming (LBT) policy is traffic-load-aware and ensures the physical NIC capacity in a NIC team is used optimally.

             

            So the best and truly load-aware balancing option is LBT.
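
             

            The trigger described above can be sketched in a few lines of Python. This is an illustrative model of the 75%-over-30-seconds rule only; the function name and structure are assumptions, not VMware's code.

            ```python
            # Hedged sketch of the LBT rebalancing trigger: traffic on an uplink is
            # only considered for a move when its mean utilization over a 30-second
            # sampling window exceeds 75% of link capacity.

            def lbt_should_move(samples_mbps, capacity_mbps, threshold=0.75):
                """True if mean utilization over the window exceeds the threshold."""
                mean = sum(samples_mbps) / len(samples_mbps)
                return mean > threshold * capacity_mbps

            # A 10 GbE uplink averaging 8 Gbps over the window is past the 75% mark,
            # while one averaging 7 Gbps is left alone.
            busy = lbt_should_move([8000] * 30, 10000)   # over threshold
            quiet = lbt_should_move([7000] * 30, 10000)  # under threshold
            ```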

             

            Refer to the links below for more info:

            http://frankdenneman.nl/2011/02/24/ip-hash-versus-lbt/

            http://blogs.vmware.com/performance/2010/12/vmware-load-based-teaming-lbt-performance.html

            • 3. Re: LACP or no LACP, that is the question
              mcowger Champion

              I tend to agree with Chris.  Unless I have a specific need for it, I just dont see the value in the extra complexity.

              • 4. Re: LACP or no LACP, that is the question
                eyeofthebeholder Lurker

                Thanks everyone for your great answers! You have introduced me to a new (to me) feature!

                 

                LBT has spawned a few questions/concerns in my mind though...

                 

                Gkeerthy, thanks for the links, they were great! Per them, is it conceivable that a VM's flow could be moved every 30 seconds with LBT? If so, does that not alarm you? Also, it looks like it is recommended to enable portfast/portfast trunk on the physical links... Given that the link never actually goes down, why is this being recommended? Is this meant to prevent the wait caused by STP convergence on VLANs (and associated vDS port groups) moved from one vDS uplink to another? If the physical switch port was already trunking the VLANs associated with the vDS port group on both vDS uplink ports, then portfast seems like it wouldn't be necessary...

                 

                Also, if LBT is the preferred configuration approach, then why would VMware implement and boast about now having LACP in 5.1? Per what you have told me, LBT seems to negate all the advantages that LACP has to offer... ?

                 

                Thanks again for all your help!

                • 5. Re: LACP or no LACP, that is the question
                  eyeofthebeholder Lurker

                  Anyone have any input on my concerns with LBT as stated in the previous post? Also, as stated above, I would be very interested in an explanation of why VMware has implemented LACP in 5.1 if, per LBT, it provides no distinct advantage...

                   

                  Thanks.

                  • 6. Re: LACP or no LACP, that is the question
                    rickardnobel Virtuoso

                    eyeofthebeholder wrote:

                     

                    Also, as stated above, I would also be very interested in getting an explanation as to why VMWare has implemented LACP in 5.1 if, per LBT, it provides no distinct advantage?...

                     

                    The reason for using LACP or static link aggregation (called "IP Hash" in vSphere) is for use cases where a VM needs more bandwidth than a single physical NIC port can provide. With both the default Port ID NIC teaming and with LBT, available on the Distributed vSwitch, a single VM could never use more bandwidth than one vmnic.

                     

                    With LACP/IP hash it is possible to use the sum of all vmnics (the physical ports on the ESXi host's network interfaces) for a single VM, provided there is a good spread of client IPs communicating with the VM.
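
                     

                    To see why a spread of client IPs matters, here is an illustrative Python sketch of IP-hash uplink selection: the source and destination addresses are XORed and the result is taken modulo the number of active uplinks. The function name is an assumption for illustration; the exact hash details are documented by VMware.

                    ```python
                    # Sketch of IP-hash teaming: each (source IP, destination IP) pair
                    # hashes to one uplink, so many clients talking to a single VM can
                    # collectively use every vmnic, while any one conversation still
                    # rides a single link.
                    import ipaddress

                    def ip_hash_uplink(src_ip: str, dst_ip: str, uplinks: int) -> int:
                        """Pick an uplink index by XORing the two IPs, modulo uplink count."""
                        src = int(ipaddress.IPv4Address(src_ip))
                        dst = int(ipaddress.IPv4Address(dst_ip))
                        return (src ^ dst) % uplinks

                    # Two clients of the same VM can land on different vmnics:
                    a = ip_hash_uplink("10.0.0.5", "10.0.0.21", 2)
                    b = ip_hash_uplink("10.0.0.5", "10.0.0.22", 2)
                    ```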

                    • 7. Re: LACP or no LACP, that is the question
                      FlbayIT Novice

                      I know this is old, but it helped me out a lot. Do any changes need to be made to the VM at all?

                      • 8. Re: LACP or no LACP, that is the question
                        rickardnobel Virtuoso

                        FlbayIT wrote:

                         

                        Do any changes need to be made to the VM at all?

                         

                        No, you do not need to make any changes or do any configuration at the VM level.

                        • 9. Re: LACP or no LACP, that is the question
                          tdubb123 Master

                          hi

                           

                          But doesn't LBT only use one link until it gets up to 75% or more, and then use the other on the same vSwitch? And LACP will use both simultaneously?

                           

                          Which would be better, LBT or LACP, with:

                           

                          4-8 1 Gb links

                           

                          2 10 Gb links?