Hi All
I have the following configuration:
2x D-Link DGS-3100 series switches in a stack, with LACP enabled on ports 21-24
I have two ESXi 5 servers, both of which have been configured for link aggregation as per:
I am really struggling to get link aggregation to work with these D-Link switches. As soon as I plug the ESXi servers into the aggregated ports, the servers drop off the network; plug them back into a non-aggregated port and they work fine again.
I have since read this: http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf which actually seems to say that LACP is not supported.
Is anyone able to help or clarify any of this for me please? Any help would be greatly appreciated.
Thanks
David
Yes, vSphere does not support LACP.
You can find some detail in the blog post "VMware vSwitch does not support LACP".
Hi..
As you've read in the article, ESX/ESXi does not support LACP (or any other dynamic trunking protocol, for that matter, unless you have a Cisco N1000v distributed vSwitch installed on top)...
You need to create a static trunk on the D-Link. (I believe this is the default setting, at least according to the D-Link manual here: http://files.dlink.com.au/Products/DGS-3100-24P/Manuals/DGS-3100_series_A1_User_Manual_v2.20.pdf )
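For reference, a static (non-LACP) link-aggregation group on the DGS-3100 CLI might look roughly like the sketch below. The group id, port list, and exact command syntax are my assumptions, not taken from this thread, so verify them against the manual linked above:

```shell
# Hypothetical sketch: static trunk of ports 21-24 on stack unit 1 (no LACP).
# Check the exact link_aggregation syntax for your firmware in the manual.
config link_aggregation group_id 1 ports 1:21-1:24
config link_aggregation group_id 1 state enable
```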
/Rubeck
Thank you both.
I have now created the static trunk on my D-Link switches. However, now my VMware servers and their VMs will not communicate with each other... Any suggestions, please?
Thanks
David
On the vSwitch settings, change Load Balancing to "Route based on ip hash".
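If it helps, on ESXi 5.x the same policy can also be set from the host CLI; "vSwitch0" is just an example name here, and this obviously needs to run on the ESXi host itself:

```shell
# Set the standard vSwitch load-balancing policy to IP hash (ESXi 5.x).
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=iphash

# Check the resulting policy afterwards:
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```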
Is your LAG port on the D-Links configured as a member of all required VLANs?
/Rubeck
Hi
I only have the one default VLAN and all ports are members of it.
Thanks
Hi VTsukanov
I have already configured this on both of the ESXi servers; are there any other options I should be aware of?
Many Thanks
David
I don't see any references in the D-Link manual (http://files.dlink.com.au/Products/DGS-3100-24P/Manuals/DGS-3100_series_A1_User_Manual_v2.20.pdf) to mode configuration for the LAG group.
Hi
I now have one server plugged into the aggregated ports, and the bandwidth is less than when it is plugged into a normal port.
Aggregated Ports: 11MB /sec
Non - Aggregated Ports: 101MB /sec
Confused I am........
Although I am not actually sure where the problem is... I would have thought the switch would have been faster than 100 MB/sec, bearing in mind it is a 1-gig switch.
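One thing worth noting about "Route based on ip hash" (per VMware's documentation and the Rickard Nobel article linked later in this thread): the uplink is chosen per source/destination IP pair, so a single transfer between two hosts can never use more than one 1 Gb link; aggregation only adds bandwidth across many concurrent peers. A minimal sketch of that selection logic follows; the helper function and the integer IP values are purely illustrative, not VMware code:

```shell
# IP-hash uplink selection, roughly: (src_ip XOR dst_ip) mod uplink_count.
# A given src/dst pair therefore always lands on the same physical NIC.
uplink_for() {
  src=$1; dst=$2; nics=$3          # IPs as 32-bit integers, for simplicity
  echo $(( (src ^ dst) % nics ))
}

# 10.0.0.5 -> 10.0.0.200 with 2 uplinks: every packet of this flow
# takes the same uplink, so throughput is capped at one link's speed.
uplink_for 167772165 167772360 2
```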
Thanks
David
We (on other D-Link models, some time ago) found that we got the best result by leaving the teaming setting at the default and not enabling LACP on the switch.
Hi
Yes, this is currently how we have the switches.
We did find one issue: we had configured a single LAG for multiple servers. We have now configured another LAG and plugged in both servers. The servers communicate fine, but the VMs do not...
I have raised a ticket with D-Link as well.
Many Thanks
David
This has now been resolved through some further "tweaking" of the D-Link switches.
Thanks for all the help
If you'd like to share the solution, it could be valuable to other people with the same setup as you. What tweaking did you do on your D-Link switches?
Hello sharedbandeddav,
It seems I have a similar issue with my D-Link DGS-1100-16 switch.
When I connect my storage device (Iomega or QNAP) to my trunked ports, only one of the two ESX servers is able to connect to the storage.
When I pull one cable out of the storage, the storage device becomes available to both hosts.
On my DGS-1100-16 I have the option to create trunks, but I don't know whether these are static or dynamic trunks. Do you know?
How did you fix this issue on your switch, and what are the "other" possibilities for modifying the settings on these D-Link switches?
Does anybody know whether this switch is compatible with ESXi using link aggregation?
BTW:
I am using vSphere 5.1.
Are there some changes to passive/active LACP in this version? Maybe more possibilities to use dynamic trunks?
Like this: http://rickardnobel.se/lacp-and-esxi-5-1/
Thx!
Hello
If this can help someone.
I struggled with the same problem: making link aggregation work from ESXi 5.0 to D-Link DGS-3100 switches... and managed to achieve it.
First of all, my D-Link switches were at firmware revision 3.60.28.
On the D-Link switches,
I changed from D-Link switches to HP.
Still looking for a "best practice" for maximum performance with iSCSI.
So far, NFS is faster than iSCSI.
I have seen many forums but no best practice for maximum throughput to the NAS with the right configuration.
Thx
In response to post № 8 (Apr 26, 2012 4:25 AM, SharedbandDav…):
Try moving vmnic1 (which has the presented IP range) to the top of the failover order - this may help you.
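As a sketch, the failover order can also be adjusted from the ESXi 5.x CLI; the vSwitch and NIC names below are examples from this thread, not verified against your host:

```shell
# Put vmnic1 first in the active uplink order on vSwitch0 (ESXi 5.x).
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --active-uplinks=vmnic1,vmnic0
```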