VMware Cloud Community
admin
Immortal

I Could Really Use Some LACP Clarification...

Hi gang,

Okay, I have read through a number of the threads in the forums here about LACP with ESX and, to be honest, I am more confused than when I started. Much of the information seems to conflict: some say there is a performance increase, some say there is not; some say you need to use EtherChannel, others claim you can only use open-standard LACP.

I have 10 stacked Cisco 3750s ready to go into production, but I would like a final answer as to whether or not LACP will be beneficial in our environment, and hence worth the trouble to set up, test, and ensure it's working as expected. We are stacking for redundancy but wanted to take advantage of LACP for the throughput increase.

My questions are as follows:

A) Does LACP split traffic to a single destination across all the ports in the group, or does it send one session across one port, the next session (if to a different destination IP) across a different port, and so on? My understanding is that LACP will only split traffic on a per-flow basis, based on source/destination IP, and will not spread traffic to a single destination across multiple ports as if they were one link, the way a traditional port aggregation group might. Is this correct?

B) I know that on the ESX host the Load Balancing policy needs to be set to Route based on IP hash. My real question here is: what is the exact command/format on the Cisco 3750 to take advantage of this? Active mode? Static? LACP or EtherChannel?

C) Is it worth it? Do you see an actual performance improvement? We are currently running on a number of 2960Gs and have two physical connections per port group, so one NIC to each switch for redundancy. Our iSCSI traffic is configured with two separate port groups per vSwitch in order to use round-robin load balancing for multipathing (EqualLogic group of arrays).
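For what it's worth, the behavior asked about in (A) can be sketched in a few lines of Python. This is a simplification for illustration only, not VMware's exact implementation, but it shows why a single source/destination pair always lands on the same uplink under an IP-hash policy:

```python
def ip_to_int(ip):
    """Convert a dotted-quad IPv4 address to a 32-bit integer."""
    a, b, c, d = (int(octet) for octet in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def choose_uplink(src_ip, dst_ip, num_uplinks):
    """Pick an uplink the way an IP-hash policy does (simplified):
    XOR the two addresses and take the result modulo the number of
    active uplinks. The same src/dst pair always maps to the same
    uplink, so one flow never gets more than one physical link's
    worth of bandwidth."""
    return (ip_to_int(src_ip) ^ ip_to_int(dst_ip)) % num_uplinks

# One source/destination pair is pinned to one uplink...
print(choose_uplink("10.0.0.5", "10.0.0.9", 2))
# ...while a different destination may hash to a different uplink.
print(choose_uplink("10.0.0.5", "10.0.0.10", 2))
```

So the aggregate bandwidth only helps when you have many flows to many different IP pairs; a single flow is still capped at one link.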

Thanks guys really look forward to receiving clarification on this so I can get these switches into production!

4 Replies
dilidolo
Enthusiast

I followed this KB; it's for 3.5, but 4.0 should be the same.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100404...

My environment is still 3.5, and we have had lots of problems with the 3750 in this config when one switch goes down. I don't know if it's our Broadcom NICs that caused all the issues or the 3750 firmware. We have no problem with our 6905 core switch with the same setup.

admin
Immortal

Thanks dilidolo for the link. Good reference for setting it up.

Really hoping to hear from others whether LACP is worth the time to set up, and whether it complicates troubleshooting efforts down the road or proves unreliable in any way.

NYSDHCR
Enthusiast

We are currently at VMware ESX 4.0.0 build-208167 on a Dell m1000 blade chassis. Here is our setup:

Switch Name       Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch0          32         11          32                1500  vmnic0,vmnic1,vmnic2,vmnic3

PortGroup Name    VLAN ID  Used Ports  Uplinks
vm_network        128      1           vmnic0,vmnic1,vmnic2,vmnic3
Service Console   171      1           vmnic0,vmnic1,vmnic2,vmnic3
VMkernel_NFS      175      1           vmnic0,vmnic1,vmnic2,vmnic3
VMkernel_iSCSI    174      1           vmnic0,vmnic1,vmnic2,vmnic3
VMkernel_FT       173      1           vmnic0,vmnic1,vmnic2,vmnic3
VMkernel_VMotion  172      1           vmnic0,vmnic1,vmnic2,vmnic3

We have a 4-pNIC EtherChannel with 6 VLANs configured. We have experienced no issues whatsoever. We are connected to Cisco 3130 switches. We plan to enable jumbo frames (already enabled on the switches), and we plan to break out our NFS and iSCSI VMkernel ports.
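For reference, a 4-port static EtherChannel trunk like the one above would look roughly like this on the Cisco side. This is only a sketch: the interface names and channel-group number are placeholders, and the VLAN IDs are just the ones from the port group list above, so check everything against your own environment:

```
! Hypothetical interface names; VLAN IDs taken from the port groups above
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 128,171-175
 switchport mode trunk
!
interface range GigabitEthernet1/0/1 - 4
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 128,171-175
 switchport mode trunk
 channel-group 1 mode on
```

The key piece is channel-group ... mode on, which builds a static EtherChannel with no LACP/PAgP negotiation; that is what the ESX vSwitch's IP-hash policy expects.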

I have proposed this configuration to VMware, and their response was this:

"The currently networking configuration you have is fine. And yes, FT is I/O intensive, if you have more nics, sure, you can separate them to dedicate NICs."

There is also a VMware KB on configuring EtherChannel:

http://kb.vmware.com/kb/1004048

And cisco has a VMware networking best practice white paper.

http://www.cisco.com/web/BE/learn_events/pdfs/Server_Virtualization.pdf

Hope this helps!

admin
Immortal

Thanks, guys, for the articles and links. We ended up testing with, and settling on, EtherChannel with mode on. It gave us the results we were after. There does seem to be a lot of confusion depending on which articles or forum threads you read: some people say LACP only and that EtherChannel is not supported, and vice versa, with some modes working and others not, etc.

Regardless, EtherChannel in mode on, with the load balancing policy on the ESX servers set to Route based on IP hash, has given us the results we were after.
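For anyone who finds this thread later, the combination we settled on boils down to something like the following on the 3750 (a sketch only; the interface range and channel-group number are placeholders for your own ports):

```
! Hash on source+destination IP to match the ESX "Route based on IP hash" policy
port-channel load-balance src-dst-ip
!
interface range GigabitEthernet1/0/10 - 11
 channel-group 5 mode on
```

Note that mode on means no LACP negotiation at all. The standard vSwitch only supports static link aggregation, which is why the active/passive LACP modes don't work against it.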

Thanks again
