joshp
Enthusiast

802.3ad Link Aggregation - 4 NICs


I am working on a project which requires four physical NICs to be connected to two separate Cisco switches for redundancy. I will need access to a minimum of six networks across these four physical NIC ports using ESX 3.5.

Our plan is to use trunking and link aggregation (EtherChannel) on the Cisco switches. Essentially, we want to aggregate two physical NIC port trunks with EtherChannel to each of the two Cisco switches. We would enable routing based on IP hash on a single vSwitch with the four NICs teamed. We would use a separate vSwitch, not part of the trunks, for vswif0 and the VMkernel port.

Does this configuration look good? I am looking for some networking folks to comment on the configuration above. Thanks.

VCP 3, 4 www.vstable.com
Accepted Solution
mvoss18
Hot Shot

Here is a discussion that asked the same question.

http://communities.vmware.com/docs/DOC-5000

Here is an excerpt from Ken Cline's blog.

http://kensvirtualreality.wordpress.com/2009/04/05/the-great-vswitch-debate%E2%80%93part-3/

A technical requirement for using IP Hash is that your physical switch must support 802.3ad static link aggregation. Frequently, this means that you have to connect all the pNICs in the vSwitch to the same pSwitch. Some high-end switches support aggregated links across pSwitches, but many do not. Check with your switch vendor to find out. If you do have to terminate all pNICs into a single pSwitch, you have introduced a single point of failure into your architecture.

If your vendor supports aggregated links across pSwitches, great. Otherwise you may need to consider another approach. You can have redundancy across the switches without doing etherchannel and just routing based on originating port ID. So you could create a vSwitch with 4 NICs (two on each pSwitch) without doing a trunk and let ESX do basic load balancing for outgoing traffic. In my experience you're much more likely to be CPU or memory bound than network bound.

mvoss18
Hot Shot

I don't believe you can have two different etherchannel groups teamed at the vSwitch level. The IP Hash algorithm is going to expect that the 4 NICs are part of one etherchannel group. You could have all 4 NICs connected to a vSwitch with each etherchannel pair assigned as active adapters to a different port group.

For example, vSwitch 1 has all 4 uplinks assigned. However, load balancing isn't configured at the vSwitch level. Instead, two different port groups are created. For each port group, set a different pair of adapters as active and set it to route based on IP hash.

Another option would be to enable one etherchannel pair at the vSwitch level and set the other etherchannel pair as Unused. If you wanted to take the physical switch with the active pair offline, set those adapters to Unused and set the other pair as Active.

Standby adapters would work for switch failures if you had one active pair and one standby pair. However, standby tries to fail over adapters one at a time. If one of the adapters in your active pair failed, it would activate one of the adapters in the other pair. Then ESX would be trying to route based on IP hash across two different etherchannel groups.
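To make the failure mode concrete, here is a toy sketch (not ESX code; NIC and port-channel names are invented) of why promoting standby adapters one at a time leaves the IP-hash team spanning two etherchannel groups:

```python
# Toy model: one-at-a-time standby promotion mixes etherchannel groups.
# NIC names and channel-group labels are hypothetical.

CHANNEL_GROUPS = {"vmnic0": "po1", "vmnic1": "po1",   # pair on pSwitch 1
                  "vmnic2": "po2", "vmnic3": "po2"}   # pair on pSwitch 2

def fail_over(active, standby, failed_nic):
    """Replace one failed active adapter with one standby adapter."""
    active = [n for n in active if n != failed_nic]
    if standby:
        active.append(standby.pop(0))
    return active, standby

active, standby = ["vmnic0", "vmnic1"], ["vmnic2", "vmnic3"]
active, standby = fail_over(active, standby, "vmnic1")

# The active set now spans two different etherchannel groups, so an
# IP-hash policy would spray one aggregation's flows across both
# physical channels -- which the switches do not expect.
groups_in_use = {CHANNEL_GROUPS[n] for n in active}
print(sorted(groups_in_use))  # ['po1', 'po2']
```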

joshp
Enthusiast

mvoss, that's incredibly helpful information. If you had to design a solution that required redundancy and maximized throughput, how would you recommend I set up the ESX environment? My requirements are that I spread the traffic across two switches for complete redundancy. Additionally, I must use trunking. Thanks in advance.

VCP 3, 4 www.vstable.com
oschistad
Enthusiast

Just a quick comment on the ongoing thread: normally, using EtherChannel / IP hash isn't required, and it may not in fact deliver what you are looking for.

The default loadbalancing algorithm for a new vSwitch is to use port-based link selection. This means that each VM connected to a vSwitch gets assigned to one of the available physical NICs connected to that vSwitch. The algorithm to select which pNIC looks simply at the virtual port ID of the VM, and the net result is that the VMs connected to that switch are equally distributed across all pNICs. This obviously limits the maximum capacity of each VM to the media speed of one physical NIC.

The IP hash algorithm, on the other hand, uses a hash of the source and destination IP to select which physical NIC to send a given packet through. For a single VM communicating with multiple servers this will enable that VM to drive more than one physical NIC, BUT: each session is still limited to the capacity of one NIC, because every packet between two specific nodes will always have the same hash and hence always hit the same physical NIC.

So: IP hash introduces a lot more complexity and switch management, and will still limit you to a single gigabit interface per session. The only way to drive more than one gigabit per VM is in those cases where there are multiple high-bandwidth connections going on with different remote clients (or servers).

If what you want to achieve is a highly available virtual network, with load balancing across multiple VMs and an aggregated bandwidth equal to the physical capacity of all connected NICs, then you may find that the default algorithm of port-based load balancing works just fine and is easy to manage. Only if you have a single VM which requires more than one gigabit of bandwidth, and which communicates with multiple destinations, will IP hash and etherchannel make sense.
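The two policies above can be sketched as follows. This is a deliberately simplified toy model, not what ESX actually computes internally; the modulo rules, CRC hash, and NIC names are all illustrative assumptions:

```python
# Toy model of the two vSwitch load-balancing policies discussed above.
# The modulo/CRC rules are simplifications; uplink names are made up.
import zlib

UPLINKS = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]

def by_port_id(virtual_port_id):
    """Originating-port-ID policy: a VM's virtual port pins it to one uplink."""
    return UPLINKS[virtual_port_id % len(UPLINKS)]

def by_ip_hash(src_ip, dst_ip):
    """IP-hash policy: the (src, dst) address pair selects the uplink."""
    digest = zlib.crc32(f"{src_ip}-{dst_ip}".encode())
    return UPLINKS[digest % len(UPLINKS)]

# Under port-ID balancing a VM always uses the same uplink...
assert by_port_id(17) == by_port_id(17)

# ...while under IP hash one VM can spread across uplinks, but any single
# src/dst conversation always hashes to the same uplink, so one session
# never exceeds one NIC's media speed.
assert by_ip_hash("10.0.0.5", "10.0.0.9") == by_ip_hash("10.0.0.5", "10.0.0.9")
```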

joshp
Enthusiast

The issue I have is that I must also use trunking, because there are more VLANs than physical network ports. I could have a scenario with two network connections from Switch1 and two from Switch2 going to vSwitch1, all four as trunks. Would the default load balancing algorithm also load balance across this trunk configuration? The other question we are trying to answer concerns inbound traffic: of the four physical trunks, which trunk port on the switch will be used for traffic into the ESX host? How do the physical switches spread the load across the trunks? Would we have a scenario where one trunk would always be used, in effect limiting the inbound bandwidth across all VMs on vSwitch1 to 1 Gb/s (the physical port speed of one pNIC)?

I appreciate the responses. It helps. :)

VCP 3, 4 www.vstable.com
oschistad
Enthusiast

I think at this point it would be helpful to break the subject down into the components involved.

First there is the port group layer, which is where you connect the virtual machines. A port group has a label and an optional vlan ID. By assigning a vlan number to a portgroup you are in effect telling the ESX server to use 802.1q tagging for all packets originating from this port group, and to evaluate incoming 802.1q packets with this vlan ID against the MAC addresses of the VMs connected to the port group.

In other words, to support multiple VLANs per physical NIC you configure the physical ports the ESX server is connected to for 802.1q tagging (a "trunk" port in Cisco parlance, a "tagged" port with most other vendors). You then configure your port groups with the associated VLAN IDs, or "none" for the native VLAN.
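The tag-matching rule described above can be sketched as a toy model (port group names and VLAN numbers here are invented for illustration, not taken from the poster's environment):

```python
# Toy model of how a port group's VLAN ID governs 802.1q frame handling.
# Port group names and VLAN numbers are hypothetical.

PORT_GROUPS = {
    "Production": 100,   # frames tagged with VLAN 100
    "DMZ": 200,          # frames tagged with VLAN 200
    "Management": None,  # native VLAN, untagged frames
}

def deliver(frame_vlan, port_group):
    """An incoming frame is only evaluated against VMs on a port group
    whose VLAN ID matches the frame's 802.1q tag (None = untagged)."""
    return PORT_GROUPS[port_group] == frame_vlan

assert deliver(100, "Production")       # tagged frame reaches matching group
assert not deliver(100, "DMZ")          # wrong tag: not delivered
assert deliver(None, "Management")      # untagged frame -> native-VLAN group
```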

Second, there is the load balancing algorithm. By adding more than one NIC to a vSwitch you have already told the ESX server to load balance traffic. The default algorithm is port-based, as I mentioned in my previous post. More generally, with the exception of 802.3ad (or IP hash), a VM will be assigned a virtual port on one of the available physical uplinks. The ESX server will then send ARP notification packets on behalf of that VM's MAC address to notify the physical switch of the presence of that host. From that point on, all packets sent to that VM will arrive on the chosen NIC of that ESX server, until that VM gets assigned a different NIC for some reason (VMotion, etc.).

Since the ESX server selects physical NICs in a fashion which, statistically speaking, results in an equal spread of VMs per physical NIC, the net result is an aggregated bandwidth equal to the sum capacity of all the NICs connected to a vSwitch.

Third, there is the specific case of communication between VMs inside (and between) ESX servers. A useful fact to know here is that the ESX server only forwards packets internally when they stay on the same port group. In other words, two VMs on the same port group on one ESX server can communicate at the speed of the ESX server's memory bus, while traffic between port groups, vSwitches or (obviously) other ESX servers will always be sent to the physical switches using the NIC which the load balancer selected for that particular VM.

So in the scenario you describe, you could in fact use a single vSwitch with four uplinks and use VLAN IDs on the port groups to distinguish between the different VLANs.

joshp
Enthusiast

Okay, I think I am getting to the issue at hand. I have actually used trunking and vSwitch VLAN tagging for a couple of years now, so I am okay with the ESX configuration. I am specifically concerned with inbound traffic to ESX over the four trunks. An ARP notification will be sent, and I understand how that would work in an ESX configuration where trunking was not being used, because the switch maintains a CAM table that says a MAC is connected via X port on the switch. But does the physical switch also maintain CAM entries for MACs connected via a trunk? I thought that only access ports were dynamically entered in the CAM table.

So, trying to narrow this down further: if vm1 boots up and ESX selects pNic1 as its network connection, and pNic1 is a trunk, will the physical switch register in the CAM that vm1 is located on the trunk connected to pNic1? And when vm2 boots and happens to select pNic2, also a trunk, will traffic inbound to vm2 use the trunk connected to pNic2 (or would it randomly select a trunk)?

Really appreciating the responses here.

VCP 3, 4 www.vstable.com
oschistad
Enthusiast

No, the mac-address-table is maintained per VLAN and per "trunk" port as well. This is how switches are able to forward traffic across trunks without flooding all packets on the trunk ports: there is a list of "remote" addresses for each trunk port. When a packet for an unknown destination is received, it is flooded to all ports participating in the VLAN, including trunk ports, and when the reply packet is sent by the receiver (typically the responder to the "arp who-has" message) the switch makes a note of where that MAC address was seen, so it can avoid flooding further packets in that conversation.
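The learn-and-flood behaviour described above can be sketched as a toy model (MAC addresses and port names here are invented; a real switch keys and ages its table in more sophisticated ways):

```python
# Toy model of per-VLAN mac-address-table learning: the switch learns
# (vlan, mac) -> port from received frames, including on trunk ports,
# and floods only frames for unknown destinations. All names invented.

cam = {}  # (vlan, src_mac) -> ingress port

def receive(vlan, src_mac, dst_mac, in_port, all_ports):
    cam[(vlan, src_mac)] = in_port                      # learn sender location
    out = cam.get((vlan, dst_mac))
    if out is None:                                     # unknown destination:
        return [p for p in all_ports if p != in_port]   # flood the VLAN
    return [out]                                        # known: one port only

ports = ["gi0/1", "gi0/2", "trunk1"]

# vm1 (behind the ESX trunk) talks first: the frame is flooded, but
# vm1's MAC is now learned against the trunk port.
assert receive(100, "vm1-mac", "srv-mac", "trunk1", ports) == ["gi0/1", "gi0/2"]

# The server replies: the switch sends it straight back down the trunk.
assert receive(100, "srv-mac", "vm1-mac", "gi0/1", ports) == ["trunk1"]
```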

So yes, the physical switch will know where to send those packets, and the ESX server will make sure that the switch gets notified of any changes. Yes, it really is that simple to configure a highly available and aggregated uplink with VMware ESX server. :)

joshp
Enthusiast

I really wish I could assign points to multiple users, because certainly many deserve points here. It seems clear to me, based on the posts, that with ESX's default load balancing mechanism (originating virtual port ID), using four trunk ports would provide basic inbound and outbound balancing. ESX selects which uplink to place a VM on when it boots, the MAC for that VM gets registered against that physical connection's trunk in the CAM table, and therefore inbound traffic for that VM's MAC travels via that specific trunk.

So using etherchannel would only be beneficial if we were trying to obtain greater than 1 Gb/s for a VM (or group of VMs). Without etherchannel we will only get a maximum of 1 Gb/s of throughput at any given time (because the physical NICs are 1 Gb/s).

Thanks for everyone's help.

VCP 3, 4 www.vstable.com
oschistad
Enthusiast

Yes, that is exactly correct on all points. :)
