VMware Cloud Community
SharedbandDavid
Contributor

ESXi 5, LACP and D-Link Switches

Hi All

I have the below configuration:

2x D-Link DGS-3100 series switches in a stack; LACP has been enabled on ports 21-24

I have two ESXi 5 servers, both configured for link aggregation as per:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100404...

I am really struggling to get link aggregation to work using these D-Link switches. As soon as I plug the ESXi servers into the aggregated ports, the servers drop off the network; plug them back into a non-aggregated port and they work fine again.

I have then since read this: http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf which actually seems to say that LACP is not supported.

Is anyone able to help or clarify any of this for me please? Any help would be greatly appreciated.

Thanks

David

19 Replies
VTsukanov
Virtuoso

Yes, vSphere does not support LACP.

You can find some detail in the blog post VMware vSwitch does not support LACP.

Rubeck
Virtuoso

Hi..

As you've read in the article, ESX/ESXi does not support LACP (or any other dynamic trunking protocol, for that matter, unless you have a Cisco N1000v distributed vSwitch installed on top)...

You need to create a static trunk on the D-Link. (I believe this is the default setting, at least according to the D-Link manual here: http://files.dlink.com.au/Products/DGS-3100-24P/Manuals/DGS-3100_series_A1_User_Manual_v2.20.pdf )

/Rubeck

SharedbandDavid
Contributor

Thank you both.

I have now created the static trunk on my D-Link switches. However, now my VMware servers and their VMs will not communicate with each other... Any suggestions, please?

Thanks

David

VTsukanov
Virtuoso

In the vSwitch settings, change Load Balancing to "Route based on IP hash".
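For anyone wondering what "Route based on IP hash" actually does: the vSwitch picks an uplink from the source and destination IP of each flow. A simplified Python sketch of the commonly described formula (XOR of the two 32-bit addresses, modulo the number of uplinks; the exact ESXi implementation may differ):

```python
def ip_to_int(ip: str) -> int:
    """Convert a dotted-quad IPv4 address to a 32-bit integer."""
    a, b, c, d = (int(p) for p in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def select_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick an uplink index: XOR of the two addresses, modulo uplink count."""
    return (ip_to_int(src_ip) ^ ip_to_int(dst_ip)) % num_uplinks

# Different src/dst pairs can land on different uplinks, which is how
# the aggregate bandwidth gets used; any single pair always maps to
# exactly one uplink.
print(select_uplink("192.168.1.10", "192.168.1.20", 2))  # uplink 0
print(select_uplink("192.168.1.10", "192.168.1.21", 2))  # uplink 1
```

This is also why the switch side must be a static trunk with a matching hash: if the two ends disagree about which link a flow belongs to, traffic breaks exactly the way described above.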

Rubeck
Virtuoso

Is your LAG port on the D-Links configured as a member of all required VLANs?

/Rubeck

SharedbandDavid
Contributor

Hi

I only have the one default VLAN, and all ports are members of it.

Thanks

SharedbandDavid
Contributor

Hi VTsukanov

I have already configured this on both of the ESXi servers. Are there any other options I should be aware of?

Many Thanks

David

SharedbandDavid
Contributor

Hi

As soon as I plug the servers into the aggregated ports, communication starts to break down.

I have enclosed a screenshot of the ESXi settings. These settings have only been made on the vSwitch and nothing else; is this correct?

Thanks

David

VTsukanov
Virtuoso

I don't see any reference in the D-Link manual (http://files.dlink.com.au/Products/DGS-3100-24P/Manuals/DGS-3100_series_A1_User_Manual_v2.20.pdf) to mode configuration for the LAG group.

Why do you think your LAG group is in static mode rather than LACP?
SharedbandDavid
Contributor

Hi

Please find attached screenshots, which show that the trunk is static as opposed to using LACP.

Thanks

David

SharedbandDavid
Contributor

Hi

I now have one server plugged into the aggregated ports, and the bandwidth is less than when plugged into normal ports.

Aggregated ports: 11 MB/sec

Non-aggregated ports: 101 MB/sec

I am confused...

Although I am not actually sure where the problem is, I would have thought the switch would have been faster than 100 MB/sec, bearing in mind it is a 1-gigabit switch.
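On that last point, roughly 100 MB/sec is actually close to the ceiling for a single gigabit port, and under IP-hash load balancing a single src/dst IP pair always rides one uplink, so a single transfer cannot go faster than one link even over a working LAG. A back-of-the-envelope calculation, assuming gigabit uplinks:

```python
link_bits_per_s = 1_000_000_000               # one gigabit uplink
raw_mbytes_per_s = link_bits_per_s / 8 / 1e6  # bits -> megabytes per second
print(raw_mbytes_per_s)
# Ethernet/IP/TCP framing overhead trims a few percent off this raw
# 125 MB/sec figure, so ~101 MB/sec on a non-aggregated gigabit port
# is already near the practical maximum for a single flow.
```

The 11 MB/sec on the aggregated ports, by contrast, does suggest a misconfiguration (e.g. mismatched hashing between switch and vSwitch) rather than a normal LAG limitation.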

Thanks

David

VTsukanov
Virtuoso

We discovered (on other D-Link models, some time ago) that we got the best result by leaving the teaming setting at the default and not setting LACP on the switch.

SharedbandDavid
Contributor

Hi

Yes, this is currently how we have the switches.

We did find one issue: we had configured a single LAG for multiple servers. We have now configured another LAG and plugged in both servers. The servers communicate fine, but the VMs do not...

I have raised a ticket with D-Link as well.

Many Thanks

David  

SharedbandDavid
Contributor

This has now been resolved, through some further "tweaking" of the D-Link switches.

Thanks for all the help

rickardnobel
Champion

If you would like to share the solution, it could be valuable to other people with the same setup as you. What tweaking did you do on your D-Link switches?

My VMware blog: www.rickardnobel.se
mauser_
Enthusiast

Hello SharedbandDavid,

It seems I have a similar issue with my D-Link DGS-1100-16 switch.

When I connect my storage device (Iomega or QNAP) to my trunked ports, only one of the two ESX servers is able to connect to the storage.

When I pull one cable out of the storage, the storage device is available to both hosts.

On my DGS-1100-16 I have the option to create trunks, but I don't know whether these are static or dynamic trunks. Do you know?

How did you fix this issue on your switch, and what are the "other" possibilities for modifying the settings on these D-Link switches?

Does anybody know whether this switch is compatible with ESXi and link aggregation?

BTW:

I am using vSphere 5.1.

Are there any changes with passive/active LACP in this version? Maybe more possibilities to use dynamic trunks?

Like this : http://rickardnobel.se/lacp-and-esxi-5-1/

Thx!

vdoyelle
Contributor

Hello

If this can help someone.

I struggled with the same problem: making a link aggregation from ESXi 5.0 to D-Link DGS-3100 switches... and managed to achieve it.

First of all, my D-Link switches were at firmware revision 3.60.28.

On the D-Link switches:

  • create link_aggregation group_id 3
  • config link_aggregation group_id 3 ports 3:(19-20) (for example).
  • config link_aggregation algorithm ip_source_dest
  • enable stp
  • config stp version rstp
  • config stp ports 3:(1-23),ch3 edge true p2p false
  • config stp ports ch2 p2p true

So to summarize, one must enable STP in order to make ESXi communicate with the DGS-3100 over a link aggregation.
mauser_
Enthusiast

I changed from the D-Link switch to HP.

I am still looking for a "best practices" configuration for maximum performance with iSCSI.

So far, NFS is faster than iSCSI.

I have seen many forum threads but no best practice for maximum throughput to the NAS with the right configuration.

Thx

16RUS
Contributor

Re: post № 8, Apr 26, 2012 4:25 AM, in response to SharedbandDav…

Try moving vmnic1 (the one with the presented IP range) to the top of the failover order; this may help.
