fr34k7
Contributor

Physical Switch Configuration for vDS

Hello Folks,

I need your input. I'm new to vDS, and at work I'm currently doing a migration from vSS to vDS.

Everything is working fine on the new infrastructure (Dell PowerEdge R630 servers connected to Aruba 5406zl switches via 2-port 10GBase-T trunks (LACP)).

The vDS has 2 LAG uplinks.
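For reference, the switch side for each new host looks roughly like this (ProVision syntax; port names and VLAN ID are just placeholders):

    trunk A1,A2 trk10 lacp
    vlan 100
       tagged Trk10
       exit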

Now I want to migrate the old cluster to the vDS as well.

The old setup is pretty simple:

An HP c3000 blade enclosure populated with four 1 Gbit pass-through modules and eight BL460c G7 blades.

Each blade has at least four NICs (two used for LAN and two used for iSCSI traffic, on separate vSwitches), and each NIC (iSCSI or LAN) is connected to a different physical switch to provide redundancy.

I did not change anything on the physical switches; I just added the hosts to the existing vDS and assigned the two LAN vmnics as uplinks LAG1/0 and LAG1/1. I did not create trunks on the physical switches, because the two LAN uplink ports are not connected to the same switch.

Unfortunately this does not work as expected. There seems to be some kind of loop now, and I see high packet loss and latency, so as a temporary fix I removed one of the vmnics assigned to the uplink.
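For what it's worth, this is the command I used in the ESXi shell to look at the LAG state on a host:

    esxcli network vswitch dvs vmware lacp status get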

What is the correct configuration for my vDS and the physical switches?

Do I have to create a new vDS with normal uplinks and use that instead of the existing vDS with LAG uplinks, or can I fix this behaviour by changing the configuration of my physical switches?

Thanks for your answers! :)

pwilk
Hot Shot

Do you have MLAG enabled on your switches? If not, you shouldn't use a LAG in this configuration, as LACP is not suited for situations where the adapters are connected to different switches.
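If you really need a LAG that spans two chassis, the switches have to present themselves as a single LACP partner. On the ProVision platform that feature is called distributed trunking; very roughly (syntax from memory, please verify against your firmware's documentation):

    switch-interconnect A24
    trunk A1 trk1 dt-lacp

In your case, dropping the LAG on those hosts is the simpler route.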

Cheers, Paul Wilk
hussainbte
Expert

Put all the active NICs in use; no LAG required.

Use Route Based on Physical NIC Load as the load-balancing policy.

This is very simple and equally good.

Check the link below for more explanation:

https://virtualizationreview.com/articles/2015/03/26/load-balancing-vsphere-vswitch-uplinks.aspx
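If you prefer PowerCLI over the client, a minimal sketch (the portgroup name is a placeholder):

    Get-VDPortgroup -Name "dvPG-LAN" |
        Get-VDUplinkTeamingPolicy |
        Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceLoadBased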

If you found my answers useful, please consider marking them as Correct or Helpful. Regards, Hussain https://virtualcubes.wordpress.com/
daphnissov
Immortal

As hussainbte recommended, I would also strongly vote for not using any form of LAG if you have an entitlement to use a vDS. LBT is a far better option: it requires no physical upstream configuration aside from regularly provisioned ports, and it has even better benefits than LACP and the other policies.

fr34k7
Contributor

OK, so my assumption was correct: I have to create an additional vDS with normal uplinks instead of LAG uplinks for those servers.

I'll create a copy of the existing vDS, change the uplink type, and migrate a few hosts sometime next week.
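Roughly this, in PowerCLI (all names are placeholders; I expect to still have to remove the LAG on the clone before adding the uplinks):

    # clone the existing vDS, then join a blade and one of its LAN vmnics
    $vds = New-VDSwitch -Name "vDS-Blades" -Location (Get-Datacenter "DC01") -ReferenceVDSwitch "vDS-Prod"
    Add-VDSwitchVMHost -VDSwitch $vds -VMHost "esx-blade01"
    $nic = Get-VMHostNetworkAdapter -VMHost "esx-blade01" -Physical -Name vmnic0
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $nic -Confirm:$false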

Hopefully everything will then work as intended.

Thanks a lot for your answers!

fr34k7
Contributor

I configured a LAG on the vDS because all uplinks on the newer servers are connected to the same switch.

daphnissov
Immortal

That's fine, but it's not required; even if all the uplinks are connected to the same switch, a LAG is not necessary.

fr34k7
Contributor

OK, I thought it was needed, because without LACP uplinks I had trouble with packet loss and high latency, which disappeared after I changed the uplink type and set it to LACP.

Zifu_invzion
Enthusiast

Hi,

As mentioned, you have to change your teaming and failover policy. When you have LACP or EtherChannel on your switches and the wrong failover policy is configured, the packets don't know where to go, so you get connectivity loss.

Configure the teaming and failover policy to Route Based on IP Hash.
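A sketch of setting that with PowerCLI (the portgroup name is a placeholder). Note that plain IP hash matches a static EtherChannel; if you use an LACP LAG on the vDS, the hashing mode is set on the LAG itself instead:

    Get-VDPortgroup -Name "dvPG-LAN" |
        Get-VDUplinkTeamingPolicy |
        Set-VDUplinkTeamingPolicy -LoadBalancingPolicy LoadBalanceIP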

BR!
