Can I combine two etherchannel "pipes" onto one vswitch?

I am planning to have two dual-port Intel PCI NICs in an ESX Server. The server only has 4 PCI slots, and the other two slots will be used by single-port HBAs for HA reasons.

Initially I was just going to have port1 from NIC1 plugged into physical switch1 and port2 from NIC2 plugged into physical switch2, and have both ports on the one vswitch with IP-hash-based load balancing. Easy and simple, with HA in place on the physical side of the network. I would just use External Switch Tagging (EST), so no special config is required on ESX.
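
For reference, a minimal sketch of that simple setup from the ESX service console might look like the following (the vSwitch and vmnic names are just assumptions, and the load-balancing policy itself is set in the VI Client under the vSwitch's NIC Teaming tab, not on the command line):

# create the vSwitch and attach one port from each NIC (names assumed)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic0 vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
# add the VM port group; with EST no VLAN ID is set on the port group
esxcfg-vswitch -A "VM Network" vSwitch1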

However, someone suggested using etherchannel to get a bigger "pipe".

My question has two parts:

1. If I have port1 from NIC1 plus port1 from NIC2 both plugged into physical switch1 and etherchannel them together, and then I have port2 from NIC1 plus port2 from NIC2 both plugged into physical switch2 and etherchannel them together, can I put all this (effectively just two big "pipes") on the one vswitch?

2. Can I then use several VLANs on these etherchannel "pipes"?

I've never used etherchannel before.

Has anyone actually done this in practice, or does anyone know of some official doco that explains how this is both possible and supported with VMware ESX Server?

Thanks in advance.

-


There is some really good information about this topic in this discussion: http://communities.vmware.com/message/902563#902563

-


Yes you can - you would create a NIC team on a virtual switch - check out the networking section of the ESX Server Configuration Guide (vi3_35_25_3_server_config.pdf).

-


The link http://communities.vmware.com/message/902563#902563 simply takes me to a VMware Outlook Web Access page (and my logon for this forum does not work on that page).

Are you able to provide a different link to the same info?

Thanks.

-


Yes, I have read (and just now re-read) the Advanced Networking chapter of vi3_35_25_3_server_config.pdf. I understand it really only outlines the basics of what can be done.

However, it seems to imply you can only create one team on a vswitch or port group. I understand that scenario, but the scenario I'm asking about has two separate teams on the one vswitch or port group. By that I mean I only want each VM to have one NIC. If I haven't explained the scenario clearly, feel free to ask me to elaborate.

I still don't know if this is possible. It would be great to hear from someone who has done this in the past, if indeed anyone has.

Also, that doc makes no mention, at least not that I could see, of whether you can etherchannel NICs together and then run VLANs on top of that. It was suggested to me that if you etherchannel NICs together then you can only use them on one VLAN, due to the logic on the physical switch side needed to maintain the etherchannel on those ports.

Looking forward to replies from anyone that knows networking inside out.

Thanks.

-


I've copied it here for you to read:

I would like to clear up some misconceptions surrounding link aggregation and ESX. One of my colleagues, Dan Whitman, wrote up this very nice summary of our stance regarding link aggregation. Hooray for Dan! Seriously, thanks Dan for the clear and concise summary.

Unfortunately, we don't document this very well, but we do in fact support 802.3ad in specific configurations. We have to be careful when we refer to LACP generically, because there are two modes which can be employed, static and dynamic, and both are in the 802.3ad IEEE standard. While the whole premise of the LACP standard is to enable dynamic "routing" of Layer 2 traffic, there is a facility built into the standard to allow the forcing of static paths. Interestingly enough, the little documentation we have says that while we support 802.3ad, we do not support LACP. The appropriate statement should be that ESX doesn't support the dynamic aspect of the standard, but does support the static mode in the 802.3ad specification.

I have worked through this conundrum with several customers, and the configuration below has been proven to work in single-switch and multi-switch environments. Note that while spanning-tree is not needed (simply because there is no way to link vSwitches together without doing extreme networking in the VMs), enabling spanning-tree is the only way I've found to enable portfast, which trims 30 seconds of downtime off port failovers.

At the end of the day, ESX 3.x does support 802.3ad and LACP in STATIC configurations. Below are the configurations of both the Cisco switch (6500 in this case) and peer vSwitch in ESX to support Link Aggregation Control Protocol (802.3ad) and port trunking/grouping (Cisco Fast EtherChannel - FEC).

Supporting links

Cisco switch example commands to group 2 physical ports in 1 trunk group

!
interface Port-channel10
 description ESX Server 1
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 spanning-tree portfast trunk
!
interface GigabitEthernet3/1
 description ESX Server 1 NIC vmnic0
 speed 1000
 duplex full
 switchport
 switchport mode trunk
 spanning-tree portfast trunk
 channel-group 10 mode on
!
interface GigabitEthernet3/2
 description ESX Server 1 NIC vmnic1
 speed 1000
 duplex full
 switchport
 switchport mode trunk
 spanning-tree portfast trunk
 channel-group 10 mode on

vSwitch example screenshot showing the necessary configuration to accommodate the Cisco port configuration above
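
The screenshot hasn't come across in this copy, so here is a rough sketch of the matching ESX-side setup (the vSwitch, vmnic, and port group names and the VLAN ID are assumptions, not taken from the original post). The essential part is that the vSwitch teaming both vmnics must have its load-balancing policy set to "Route based on ip hash" (vSwitch Properties > NIC Teaming in the VI Client) to match the static channel-group above:

# one vSwitch, containing both uplinks that are members of Port-channel10
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic0 vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
# port groups carry the VLAN tags, since the physical ports are dot1q trunks
esxcfg-vswitch -A "VM Network" vSwitch1
esxcfg-vswitch -v 105 -p "VM Network" vSwitch1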

Dan Whitman - System Engineer - Air Force Team - VMware

-


Hi cpqarray,

Thanks for the explanation about the static and dynamic aspects of 802.3ad (etherchannel). I was not aware of that before.

I have downloaded and read the virtual_networking_concepts.pdf document, along with the info you provided. Although it is good to know about the simple scenario of just one etherchannel team going to a single port group on a vswitch - which seems to be the only example in any doco I have read or from any tech I've conversed with - it still doesn't answer my initial question, which is:

Can I put two separate etherchannel teams (the reason for two etherchannel teams is that team1 goes to physical switch1 and team2 goes to physical switch2 - only because I need physical HA, and I read some VMware doco that says you should not etherchannel across different physical switches, or maybe it said it's just not possible) onto the one port group on a vswitch inside ESX Server?

Looking forward to your reply.

PS. The virtual_networking_concepts.pdf doco didn't make it clear whether or not I can still use multiple VLANs across an etherchannel team - do you know if I can?

-


This thread should not be in the VCP community. Moving to the appropriate product community.




---

Badsah Mukherji

Senior Community Manager, VMware Communities

-


Hi,

You cannot place two NIC teams that are on separate trunks on the same vswitch port group; they will need to be all on the same trunk/team, or all separate 802.1q VLAN ports on the same team.

Yes, you can have multiple VLANs on the FEC trunk, but I would suggest you change the native VLAN to something other than 1 if you have not already.
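
As a rough illustration (the VLAN numbers here are made up, not from this thread), the port-channel from the config earlier in the thread could carry several VLANs with a non-default native VLAN like this:

interface Port-channel10
 switchport trunk native vlan 999
 switchport trunk allowed vlan 105,106,107

The matching VLAN IDs would then be set on the ESX port groups (VST) rather than using EST.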

Just to clarify, 802.3ad Link Aggregation is not the same as FEC (Cisco Fast EtherChannel).

802.3ad is an IEEE protocol standard and FEC is a Cisco proprietary protocol. Cisco switches can do both of them.

Do you really need the trunk?

If none of your VMs have more than a single 1G vNIC per port group, then it is not required and only adds complexity.

You can cross separate physical switches with a trunk; the above example will work.

This is a great, helpful PPT.


-


Hi Mike,

Thanks for clarifying that. So it appears it's a no-go. The main idea for the 802.3ad link aggregation was simply to give the VMs (which only have one vNIC each) more bandwidth. However, I'm also required to provide multiple paths across different physical switches - so it does appear I cannot do both. Darn. Looks like it will have to be just one port per physical switch, in which case I may as well stick with the default port-based load balancing.

You mentioned I can trunk across different physical switches - so for that I would plan to use one port per physical switch (different ports than those used for the VM network), and I would have 3 VLANs on this vswitch (VMkernel VMotion, service console, and heartbeat for our Microsoft clusters). We only plan to use FC SAN (no iSCSI or NFS), so there is no need for two separate VMkernel networks (a VMotion network separate from an iSCSI network).

So can you confirm ESX 3.5 does not support etherchannel, but it does support 802.3ad Link Aggregation?

Also, are you aware of anyone who has used two physical switches for the VM network, and yet has also teamed NICs together for a larger pipe? If not, does that mean that every instance of NICs teamed for a single large pipe has always been with just one physical switch, meaning they have a single point of failure? That doesn't strike me as best practice, unless I'm missing part of the picture somewhere...

Looking forward to your reply.

Thanks.

-


You can achieve what you are looking for by using 802.1q port trunking and NIC teaming and you will have the 2G bandwidth over the team.

2 Adapters for VM traffic teamed across the two physical switches

2 Adapters for SC VMotion teamed across the two physical switches
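
As a sketch of the second of those (all names and VLAN IDs are assumptions), the SC/VMotion vSwitch with two uplinks, one to each physical switch, and 802.1q-tagged port groups might look like:

esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic1 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "Service Console" vSwitch2
esxcfg-vswitch -v 106 -p "Service Console" vSwitch2
esxcfg-vswitch -A "VMotion" vSwitch2
esxcfg-vswitch -v 107 -p "VMotion" vSwitch2

(A Service Console interface would still need to be attached to its port group with esxcfg-vswif, and the VMotion VMkernel port with esxcfg-vmknic.)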

It is possible to do this with 802.3ad, but it adds complexity that would just make troubleshooting difficult, and you don't really need it.

I have this type of config on HP ProCurve, just more of it and at a larger scale, and I have some 802.3ad static trunk configurations working as well.

I have 802.3ad trunks across the switches so there are no single points of failure.

ESX 3.5 works with static FEC and 802.3ad configurations; it does not support (use) the LACP and PAgP dynamic protocols on the teamed NICs.

-


Hi Mike,

Please forgive my ignorance, but I thought that if I use only 802.1q port trunking and NIC teaming (without any link aggregation or Fast EtherChannel) then I will not have a single 2G of bandwidth over the team, but rather two separate 1G links, i.e. the most bandwidth any one machine could ever use at any point in time is just 1G - or is that not correct?

Thanks in advance.

-


You are correct. The only way to possibly get more than one pNIC's worth of bandwidth out of a single VM is to use IP hash as your load balancing mechanism and have your pSwitch configured for 802.3ad trunking. In that case, each IP conversation could - possibly - use a different pNIC. My question is: do you really have a VM that is using one pNIC's worth of network bandwidth? I would be surprised if you do. Also, consider your pSwitch infrastructure - what is the connectivity between your pSwitches? What is the utilization on those links? I doubt they're saturated. If your ISL is not pushing significantly more than one pNIC's bandwidth, then you probably don't need to complicate your environment with link aggregation to your ESX hosts.
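
To make that concrete with a simplified illustration (not VMware's exact hashing algorithm, and the addresses are made up): with two active uplinks and IP-hash load balancing, a VM's conversation with 10.0.1.10 might hash onto vmnic0 while its simultaneous conversation with 10.0.1.11 hashes onto vmnic1, so the VM can only exceed one pNIC's worth of bandwidth when it is talking to several destinations at once; any single conversation is still limited to one link.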

Ken Cline

Technical Director, Virtualization

Wells Landers

VMware Communities User Moderator

-


With the port ID policy the load balancer will try to spread the VMs across the available adapters, and this will allow you to use the 2G sum over multiple VMs. A single VM will only get a maximum of 1G (more like 500Mb/s in the real world).

-


Hi Mike,

Thanks for the explanations. I like to know my options so I can make an informed decision.

One more question in this Load Balancing (LB) area:

From what I understand, when I have two NICs trunked together on the one vswitch: if I use port-based LB, then a VM will choose a NIC when it powers on and will stay with that NIC until it powers off or is moved to another host. However, if I use IP-based LB, then each time the VM needs to talk to a machine external to the ESX host on which it lives, it could choose either of those two trunked NICs for that particular conversation, and stay with that NIC only for that conversation. In other words, the VM has the capability to choose either of those two trunked NICs each time it starts a new conversation. This sounds like a much more even use of the two NICs.

Is this correct? If not, can you please explain why?

Thanks.

-


Yes, that is correct; the port ID policy will always use the same physical path unless a failover event occurs. The use of MAC and IP hash is not recommended unless you configure 802.3ad static-based trunking on the physical switches. Keep in mind that hashing will add CPU load on the ESX server, but it is usually not an issue. As well, hashing allows you to control outgoing traffic but not incoming. 802.3ad over two separate switches is more complex, but it is possible, as I indicated earlier. You would need to carefully configure the policies to control the port groups in a manner that does not allow them to split across the physical switches. I have done this and it works. Issues on a port group will occur during a failed state if you do not prevent the path split. The big issues come when one switch goes bad for reasons other than complete failure.
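
As a concrete (hypothetical) example of preventing the path split: on port group A set vmnic0 as the active adapter and vmnic2 as standby, and on port group B set vmnic2 active and vmnic0 standby, using the per-port-group NIC Teaming failover-order override in the VI Client. Each port group then stays on one physical switch unless a failover event forces it across.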

NB. Awarding points is like saying thanks.

Here is some good info on it.

-


Thanks for the note about awarding points. Correct answer and 10 points definitely going your way for your help on this thread.

Where can I find out what the little icons/symbols mean that are displayed next to most of the names of people that post on this forum? I noticed I have a flag and you have a crown. How many icons are there?

Thanks.

