Western0
Contributor

Single Etherchannel group per ESXi Host?

I'm building an ESXi 4.1 Enterprise infrastructure with iSCSI storage and I'm considering a few different network alternatives. I'm using Cisco 3750 stacked switches. In the past I've created one EtherChannel group for iSCSI and one EtherChannel group for VM traffic per host. However, 3750 switch stacks have a 12-EtherChannel-group maximum, meaning I won't be able to support very many hosts per switch stack (four hosts, since I'm burning EtherChannel groups on a few other network uplinks).

Due to that limitation I'm considering bonding all NICs per ESXi host (8x 1 Gbps) into a single EtherChannel group and then VLANing out the different components. I still plan on mapping each vmkernel port (I'll have 3-4 of them) to specific pNICs and leaving the rest of the NICs for VM traffic, management traffic, and vMotion. If I do this I'll be able to support more hosts per 3750 stack.
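On the switch side I'd be looking at something roughly like this (interface numbers and VLAN IDs are placeholders; since 4.1 standard vSwitches don't speak LACP, the channel would have to be static, i.e. mode on, with IP hash load balancing set on the vSwitch):

    ! One static 8-port EtherChannel trunk per host (example ports/VLANs)
    interface range GigabitEthernet1/0/1 - 8
     description esxi-host1 uplinks
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 10,20,30,40
     channel-group 1 mode on
    !
    ! Hash on source and destination IP to match the vSwitch IP hash policy
    port-channel load-balance src-dst-ip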

There is a similar post below, although iSCSI was not being used:

http://communities.vmware.com/thread/280515

What do you guys think?

ThompsG
Virtuoso

Hi,

The limit is 48 EtherChannels on the Cisco 3750 if you are running IOS 12.2(25)SE or later:

The Catalyst 3750/3560 series switch can support up to eight compatibly configured Ethernet interfaces in an EtherChannel. The EtherChannel provides full-duplex bandwidth up to 800 Mbps (Fast EtherChannel) or 8 Gbps (Gigabit EtherChannel) between your switch and another switch or host. With Cisco IOS Software Release 12.2(20)SE and earlier, the number of EtherChannels has a limit of 12. With Cisco IOS Software Release 12.2(25)SE and later, the number of EtherChannels has a limit of 48.

From here: http://www.cisco.com/en/US/tech/tk389/tk213/technologies_tech_note09186a0080094714.shtml#catalyst

Upgrade the IOS and don't worry about it ;)
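You can check what version you're running and how many channel groups are already defined with:

    show version | include IOS
    show etherchannel summary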

Glen


beyondvm
Hot Shot

Using a single EtherChannel for everything is not a good idea or best practice. As a general rule you should at least separate out storage and management/vMotion traffic, storage especially.

--- If you found any of my comments helpful please consider awarding points for "Correct" or "Helpful". Thanks!!! www.beyondvm.com
Western0
Contributor

Thank you both for your responses. I wasn't aware of the 48-EtherChannel limit; that will definitely resolve my issue.

Beyondvm - I am aware of the best practice of isolating iSCSI traffic and agree with the idea. In the unlikely event that the physical NICs become saturated, data corruption could very well occur on the VMs. With that said, I've been thinking about iSCSI best practices in VMware and EtherChannel, and some of my current knowledge conflicts...

When deploying iSCSI solutions I typically create an EtherChannel group out of 4 NICs. On the VMware side I create 4 vmkernel ports and associate each one with a specific physical NIC. I got this idea from this fairly popular page. However, it seems to me that assigning a specific vmkernel port to a physical NIC in an EtherChannel group would be self-defeating, because EtherChannel will try to balance traffic based on IP hash. On the SAN side it depends on the solution. I've done several deployments with HP P4000 (LeftHand) nodes, which have their own load-balancing system, and for those I do not configure EtherChannel on the switches.
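The binding itself I do with esxcli against the software iSCSI adapter (the vmk numbers and vmhba name below are just examples; the software initiator is often vmhba33, but check yours):

    # bind each iSCSI vmkernel port to the software iSCSI initiator
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    # verify the bindings
    esxcli swiscsi nic list -d vmhba33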

This conversation is moving away from my original question but I think it is still relevant enough. Could my proposed vmkernel-to-pNIC mapping be counter-productive if I'm using EtherChannel for iSCSI?

ThompsG
Virtuoso

Hi,

I became intimately familiar with EtherChannel limits after we installed a single Nexus 5020 at our DR site ;) It has a hard limit of 16 EtherChannels unless you have two of them, in which case you can create virtual port channels (vPCs) and the limit goes away (well, gets increased).

Anywho, back to the matter at hand: I was going to send you a link to a page that goes into detail about iSCSI, but noticed you have already referenced that website in your post. One thing though: I'm not sure where you got the idea from the article to use EtherChannels for the NICs which make up the iSCSI vSwitch. When I read the article, and from what I understand about iSCSI, the best practice is not to use EtherChannels, since you dedicate a specific NIC to each VMkernel port group. If you look closely at the screenshot you will also see they leave the load balancing set to "Route based on the originating virtual port ID". My feeling is that unless you are connecting to an NFS mount, your configuration is counter-productive, and I would not create an EtherChannel out of the 4 NICs. Still do as the article suggests and create multiple paths to your array via multiple NICs, which will give you the load balancing and redundancy you want, but the ports would not be in an EtherChannel.
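On the switch side that means the iSCSI uplinks are just plain ports in the storage VLAN, no channel-group at all; something like this (ports and VLAN are made up):

    interface range GigabitEthernet1/0/5 - 8
     description esxi-host1 iSCSI uplinks (no EtherChannel)
     switchport mode access
     switchport access vlan 100
     spanning-tree portfast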

Thoughts?

Glen

VMmatty
Virtuoso

I agree with ThompsG regarding not using EtherChannel for your iSCSI links. The post that you linked was pretty explicit, I thought; it says "do not use link aggregation for block storage" or something similar.

I think that using EtherChannel for everything else is probably OK. I do remember reading once that using EtherChannel and IP hash load balancing wasn't recommended if you had virtualized MSCS/failover clusters. Does anyone know/remember if that is true, or am I just remembering it incorrectly?

Matt | http://www.thelowercasew.com | @mattliebowitz
Western0
Contributor

Hi VMmatty,

The article recommends not using the Round Robin PSP if you are mapping RDMs to VMs for the purpose of Microsoft clusters. Perhaps that is what you are thinking of.
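For anyone following along, switching a non-MSCS LUN to Round Robin is done with esxcli in 4.1; the naa device ID below is just a placeholder:

    # list devices and their current path selection policy
    esxcli nmp device list
    # switch an example device to Round Robin (substitute your own naa ID)
    esxcli nmp device setpolicy --device naa.600000000000000000000000000000 --psp VMW_PSP_RR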

I have a few environments running with EtherChannel and multiple vmnics that haven't had any issues, but perhaps they aren't performing optimally. I'll be building a new environment soon with HP P4000 nodes; I'll test without EtherChannel and post the results.

Thanks for the input!

VMmatty
Virtuoso

I'm aware of the restriction on using Round Robin MPIO with MS clusters, but I could have sworn I read somewhere that there was an issue with using EtherChannel/IP hash load balancing as well. I can't find it for the life of me, so I think I'm going to have to admit that I am not remembering it correctly. Ah, the joys of getting old...

Matt | http://www.thelowercasew.com | @mattliebowitz