
    VMware Ethernet link binding with Cisco EtherChannel and Trunking

    jaspain Enthusiast

      On our VMware ESX server, an HP ProLiant DL380 G3, I have been looking for information about how to get VMware Ethernet link binding to work with Cisco EtherChannel and trunking. The information in the documentation, knowledge base, and forums is sparse and somewhat contradictory. I have gotten it to work, so I thought I would post the details here and ask for comments from other users.


      First of all, from the standpoint of VMware, creating an EtherChannel is as simple as creating or editing a virtual switch and assigning two or more outbound network adapters to it. Creating a trunk involves creating one or more port groups under a virtual switch and assigning VLAN IDs to them. This is all described beginning on page 215 of the ESX Server 2 Administration Guide.
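      For reference, the same thing can be seen in the service console configuration files. The following is a minimal sketch, assuming the ESX 2.x convention of declaring bonds in /etc/vmware/hwconfig and network labels in /etc/vmware/netmap.conf; the exact keys can vary by build, and the MUI remains the supported place to make these changes (the VLAN ID itself is assigned per port group there):

      # /etc/vmware/hwconfig -- bond two outbound adapters into bond0
      nicteam.vmnic0.team = "bond0"
      nicteam.vmnic1.team = "bond0"

      # /etc/vmware/netmap.conf -- the network label that virtual machines connect to
      network0.name = "VM Network"
      network0.device = "bond0"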


      VMware uses the IEEE 802.1q protocol for VLAN tagging on a trunk. For EtherChannel, VMware by default uses the source MAC address of the virtual machine to select the outgoing datalink. The load balancing mechanism can be changed to source IP address as described on page 371 of the administration guide. VMware does not support load balancing by destination MAC or IP address. VMware also does not support Cisco's Dynamic Trunking Protocol (DTP) for negotiating trunking on a datalink, or the VLAN Trunking Protocol (VTP) for distributing VLAN definitions. Nor does VMware support Cisco's Port Aggregation Protocol (PAgP) or the IEEE 802.3ad Link Aggregation Control Protocol (LACP) for dynamically setting up EtherChannels.
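      Changing a bond over to source-IP balancing is the page 371 procedure. As a hedged sketch only, assuming a nicteam key in /etc/vmware/hwconfig (the key name here is my assumption; verify it against the guide for your build):

      # /etc/vmware/hwconfig -- assumed key name, check the Administration Guide
      nicteam.bond0.load_balance_mode = "out-ip"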


      Given these facts, Cisco switches must be carefully configured to set up EtherChannel and/or trunking manually. We are using a Cisco Catalyst 6500 with a Sup2/MSFC2 module and Supervisor IOS software. The relevant EtherChannel and trunking configuration for two datalinks to the ESX server is as follows:


      interface Port-channel1
       description VMware ESX Adapter0 Network
       no ip address
       switchport
       switchport trunk encapsulation dot1q
       switchport trunk allowed vlan 2,3
       switchport mode trunk
       switchport nonegotiate
      !
      interface GigabitEthernet1/1
       description VMware ESX EtherChannel link 0
       no ip address
       switchport
       switchport trunk encapsulation dot1q
       switchport trunk allowed vlan 2,3
       switchport mode trunk
       switchport nonegotiate
       channel-group 1 mode on
      !
      interface GigabitEthernet1/2
       description VMware ESX EtherChannel link 1
       no ip address
       switchport
       switchport trunk encapsulation dot1q
       switchport trunk allowed vlan 2,3
       switchport mode trunk
       switchport nonegotiate
       channel-group 1 mode on


      First, with regard to EtherChannel, the configuration of the Port-channel virtual interface and of the associated physical interfaces in the channel group has to be the same. Under the physical interfaces the channel-group mode is set to on. This manually bundles the interfaces into an EtherChannel and disables PAgP and LACP. On the Catalyst 6500 the default load-balancing hash is based on source and destination IP addresses (src-dst-ip), which works well in most situations where traffic from the switch to the ESX server is coming from clients with essentially random IP addresses. The "port-channel load-balance" global configuration command can be used to change the load balancing method as needed, as shown below.
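      For example, to hash on source IP only and then confirm the channel from enable mode (a sketch against Supervisor IOS; the exact option keywords vary slightly by release):

      Switch(config)# port-channel load-balance src-ip
      Switch# show etherchannel load-balance
      Switch# show etherchannel 1 summary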


      With regard to trunking, the configuration under the Port-channel and physical interfaces also has to be the same. The trunk encapsulation is set to 802.1q to match VMware. Limiting the allowed VLANs to those used by the virtual machines on the ESX server eliminates broadcast traffic from other VLANs that would otherwise be flooded over the trunk. In a trunk between Cisco switches this would happen automatically with VTP pruning, but with VMware, VTP is not in use. Finally, "switchport mode trunk" forces the interface into trunking mode, and "switchport nonegotiate" turns off Dynamic Trunking Protocol (DTP) negotiation, which VMware does not speak. Trunk status can be verified as shown below.
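      A quick check from enable mode, assuming the standard Supervisor IOS show commands:

      Switch# show interfaces trunk
      Switch# show interfaces port-channel 1 trunk
      Switch# show interfaces gigabitethernet 1/1 switchport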


      Further information is available in the Cisco configuration guides at http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/122sx/swcg/channel.htm for EtherChannel and http://www.cisco.com/univercd/cc/td/doc/product/lan/cat6000/122sx/swcg/layer2.htm for trunking.

        • 1. Re: VMware Ethernet link binding with Cisco EtherChannel and Trunking
          JMills Master

          EtherChannel itself is a Cisco brand name for Cisco's proprietary link aggregation technology; IEEE 802.3ad is the later, standards-based equivalent.


          VMware ESX Server does NIC Teaming (our term), and supports various flavors of load balancing and failover-only on a NIC Team... Only the 'out-ip' load balancing mode requires Cisco EtherChannel or IEEE 802.3ad support on the physical switches, in the form of IEEE 802.3ad LACP "static" mode or Cisco EtherChannel PAgP "static" mode.
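          To make "static" concrete on the Cisco side, these are the channel-group modes; only mode on matches what ESX Server speaks (a sketch; the LACP keywords exist only on IOS releases that support channel-protocol lacp):

          interface GigabitEthernet1/1
           ! static bundle: sends no PAgP or LACP frames -- the mode that works with ESX
           channel-group 1 mode on
           ! the negotiated alternatives, which ESX will not answer:
           !   channel-group 1 mode desirable   (PAgP)
           !   channel-group 1 mode active      (LACP)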


          One additional thing to consider for your configurations is that, since a NIC Team cannot directly bridge traffic between two physical NICs, you should (in order of preference; see the configuration sketch after this list):


          • Disable IEEE 802.1D Spanning Tree Protocol entirely for the physical switch ports feeding your ESX Server chassis.

          • Use IEEE 802.1w Rapid Spanning Tree Protocol.

          • Use IEEE 802.1D Spanning Tree Protocol in "PortFast" mode.
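
          A sketch of those three options on Catalyst IOS follows. Command availability depends on the release (rapid-pvst in particular arrived late on the Catalyst 6500), so treat this as illustrative rather than definitive:

          ! Option 1: keep STP from acting on the server-facing ports
          interface GigabitEthernet1/1
           spanning-tree bpdufilter enable
          !
          ! Option 2: run Rapid Spanning Tree switch-wide, where the release supports it
          spanning-tree mode rapid-pvst
          !
          ! Option 3: PortFast on the server-facing ports (trunk ports need the trunk keyword)
          interface GigabitEthernet1/1
           spanning-tree portfast trunk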