fadfo
Contributor

Active/Active NIC Teaming

I have two Cisco 6509 switches.

I configured the two physical network adapters on my ESX 4.1 host as Active/Active, with one NIC connected to each switch.

Does this configuration give me more throughput?

How does ESX handle the traffic in Active/Active NIC Teaming?

7 Replies
Texiwill
Leadership

Hello,

Moved to the vSphere Networking forum.

NIC teaming does not aggregate bandwidth, but it will load balance the VMs across the links, by default based on the virtual port ID each VM occupies on the vSwitch port group. There are a number of load balancing options, and these, combined with NetIOC, can increase the apparent bandwidth.
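
To make the default behavior concrete, here is a rough Python sketch of how port-ID placement behaves (the mapping rule, names, and numbers are illustrative assumptions, not VMware's actual implementation):

```python
# Toy model of "Route based on originating virtual port ID":
# each VM's virtual port is pinned to exactly one uplink, so only
# the population of VMs as a whole is spread across the team.

uplinks = ["vmnic0", "vmnic1"]  # the Active/Active team (assumed names)

def uplink_for_port(virtual_port_id: int) -> str:
    # One simple mapping consistent with the behavior described above:
    # the port ID selects an uplink, which never changes while the VM runs.
    return uplinks[virtual_port_id % len(uplinks)]

for port_id in range(4):  # four VMs on the vSwitch
    print(f"VM on port {port_id} -> {uplink_for_port(port_id)}")

# Ports 0 and 2 land on vmnic0, ports 1 and 3 on vmnic1.
# No single VM ever sees more than one NIC's worth of bandwidth.
```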


Best regards,
Edward L. Haletky
vExpert XIV: 2009-2023, VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
fadfo
Contributor

Ok.

Since I have load balancing only and the uplinks go to different switches, there is no way to configure link aggregation between my ESX 4.1 host and my Cisco 3020 blade switches.

I can only configure an aggregation from my 3020 to my 6509 core switch, can't I?

a_p_
Leadership

The only way to configure link aggregation is a very expensive one.

You would have to license Enterprise Plus and, on top of that, the Nexus 1000V.

André

EDIT: see http://www.vmware.com/products/vnetwork-distributed-switch/features.html

NickHorton
Contributor

Why would you have to have the Nexus 1000v? Link aggregation can work without the 1000v. However, link aggregation across switches would require switches that support cross-stack link aggregation. Are you saying the 1000v supports link aggregation across Cisco switches that don't support cross-stack link aggregation?

fadfo
Contributor

I'm configuring my ESX servers with NIC teaming set to "Route based on the originating virtual port ID".

Each network card is connected to a different Cisco 3020 switch, and on those switches I configured an EtherChannel to connect to the two 6509 core switches.

NickHorton
Contributor

From my understanding, when you configure a vSwitch with "Route based on originating virtual port ID", any given VM only ever uses one physical uplink. If you configure the physical switch with a link aggregation port channel, set its load balancing algorithm to, say, src-dst-ip, and configure the vSwitch for IP hash load balancing, it will actually use multiple uplinks.

As I understand it (again, that caveat: not a network guy), you can't span a vSwitch across two switches without a spanning tree violation that disables all but one port, UNLESS your switches support cross-stack link aggregation. Someone correct me if I am wrong, please. I'd like to better understand this for an upcoming rollout in our own office.

rickardnobel
Champion
(Accepted solution)

As I understand it (again, that caveat: not a network guy), you can't span a vSwitch across two switches without a spanning tree violation that disables all but one port, UNLESS your switches support cross-stack link aggregation. Someone correct me if I am wrong, please.

You are correct. The confusion is only in the terminology.

There is a network standard called 802.3ad (its working group name) or 802.1AX (its real name now, although the working group name sticks around). It describes how two devices can bundle several links together and logically treat them as one. On Cisco switches this is called an "EtherChannel", on HP ProCurve switches it is called a "trunk", and on most servers it is called a "NIC team". On ESX it is called "IP hash".

Both sides have to be configured for this; otherwise the same MAC address will appear on different ports at the same time, which would cause various serious problems, including Spanning Tree trouble among other things.

802.3ad has two modes: static or dynamic configuration of the link aggregation. Static is quite simple; both sides must just be configured manually. With dynamic, a protocol called LACP is used to negotiate the aggregation. Cisco also has its own proprietary PAgP protocol that does the same thing.

ESX/ESXi hosts only support static link aggregation, so this must also be set on the physical switch side.

802.3ad/802.1AX does not specify HOW the load should be balanced across the available links. That is up to each device to decide, and it could very well be different on the two sides without any problems.

Many switches (physical ones, and also the VMware vSwitch) use a load balancing method that includes the source and destination IP addresses to choose an outgoing NIC. This works well and is quite simple to do, but it limits a session between any two IPs to ONE single NIC. If a link failure occurs, it will switch to a new one.
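
As a toy illustration of the effect (a simplified Python sketch; the real hash differs in detail, and all names here are made up):

```python
import ipaddress

uplinks = ["vmnic0", "vmnic1"]

def uplink_for_flow(src_ip: str, dst_ip: str) -> str:
    # Hash the two addresses together: a given pair of IPs always
    # maps to the same uplink, which is why one session between two
    # hosts is capped at a single NIC's bandwidth.
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return uplinks[(s ^ d) % len(uplinks)]

# The same two hosts always land on the same NIC...
print(uplink_for_flow("10.0.0.5", "10.0.0.80"))  # vmnic1
print(uplink_for_flow("10.0.0.5", "10.0.0.80"))  # vmnic1
# ...while traffic to a different host may hash to the other NIC.
print(uplink_for_flow("10.0.0.5", "10.0.0.81"))  # vmnic0
```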

Some switches have the possibility to use more advanced algorithms for this, for example including both the src/dst IP addresses and also the TCP port numbers. This is more "expensive" to do, since the packet has to be inspected more deeply and it causes more CPU overhead, but it allows the load between two IP hosts to use several NICs. This is not supported on the standard vSwitch or the Distributed Switch, but it is on the third-party Nexus 1000V.
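
Extending the same toy hash (again, not any vendor's real algorithm) shows why folding in the TCP ports lets two hosts use several NICs: separate sessions between the same pair of IPs can now hash to different uplinks.

```python
import ipaddress

uplinks = ["vmnic0", "vmnic1"]

def uplink_for_session(src_ip: str, dst_ip: str, sport: int, dport: int) -> str:
    # Same XOR-style toy hash as before, but with the TCP ports mixed
    # in, so distinct sessions between the SAME two hosts can spread.
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return uplinks[(s ^ d ^ sport ^ dport) % len(uplinks)]

# Two concurrent connections between the same pair of hosts:
print(uplink_for_session("10.0.0.5", "10.0.0.80", 50000, 443))  # vmnic0
print(uplink_for_session("10.0.0.5", "10.0.0.80", 50001, 443))  # vmnic1
```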

However, all of these methods are link aggregation; they differ only in how the frames are placed onto the NICs.

As for using two different physical switches, the standard does not allow this. The aggregated links must start and end between only two devices. However, several switch vendors (e.g. Cisco and HP) have come up with their own solutions for this, called for example stacked-switch EtherChannel or Distributed Trunking.

IF you have switches with such a feature, and if it is configured correctly on the switch side, you could set up for example four VMNICs with "IP hash" on your ESX/ESXi host and connect them to the two physical switches. This would probably be the best combination of fault tolerance and load balancing.

My VMware blog: www.rickardnobel.se