VMware Cloud Community
digitalnomad
Enthusiast

5.5 vDS with Cisco Hardware in an LACP and vPC configuration

Hello All,

I'm a little confused by the new enhanced LACP features of the 5.5 vDS and am trying to determine the best implementation for the front end of our VMware infrastructure. Any guidance would be appreciated, because I can't tell whether the advanced configuration is actually necessary.

Here's the hardware and network setup I'm working with:

Each host has 6x 10 GbE NICs: 2 for host operations (management, vMotion) and 4 for data (external VLANs). The network connections are trunked, with 2 VLANs for host operations and 17+ for data. The physical links are split between A-side and B-side Nexus 2232 FEXes uplinked to Nexus 5000s. The data ports are bound as LACP pairs and then vPC'd, for 40 Gb of available data bandwidth and 20 Gb for the console (host operations) links.

I've watched some of Chris Wahl's videos and read his networking book and articles, but I'm trying to determine whether I need to switch to enhanced mode and go through the LAG configuration. Is that a requirement? The other vDSs in the environment are either in basic mode (probably upgraded by my predecessor from 4.x) or a 1000v that needs to be phased out due to lack of network team support.
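
(Side note: one rough way to see which LACP mode each existing vDS reports is to query vCenter with pyVmomi. The sketch below is untested here; the lacpApiVersion property name, where 'singleLag' means basic and 'multipleLag' means enhanced, plus the hostname and credentials, are assumptions to verify against your environment and the vSphere API reference.)

```python
# Rough pyVmomi sketch: list each vDS with its switch version and LACP API mode.
# Assumptions: pyVmomi installed, vCenter reachable, and the lacpApiVersion
# property name taken from the vSphere 5.5 API (verify against the SDK docs).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only; keep cert checks in prod
si = SmartConnect(host="vcenter.example.local",  # placeholder hostname/credentials
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        version = dvs.config.productInfo.version
        # Only the VMware vDS exposes lacpApiVersion; a 1000v will report "n/a".
        lacp_mode = getattr(dvs.config, "lacpApiVersion", "n/a")
        print(f"{dvs.name}: version {version}, LACP API {lacp_mode}")
    view.Destroy()
finally:
    Disconnect(si)
```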

Thanks in Advance...DGN

Nick_Andreev
Expert

If you're not sure that you need vDS LAGs, you most likely don't need them.

One particular benefit of using link aggregation in a Cisco Nexus environment is that vPC LAGs guarantee that VMs running on different hosts always communicate through one switch instead of traversing the redundant switch in the pair, which can otherwise happen depending on the traffic flow. LAGs also give you somewhat faster convergence if a network link fails.

If you're pushing a large amount of traffic between your VMs and suffering from hot spots in your network, you may benefit from that. Otherwise, you're just introducing complexity into your network design.

---
If you found my answers helpful please consider marking them as helpful or correct.
VCIX-DCV, VCIX-NV, VCAP-CMA | vExpert '16, '17, '18
Blog: http://niktips.wordpress.com | Twitter: @nick_andreev_au
digitalnomad
Enthusiast

Being adventurous, and with the project pushed back a little, I decided to venture down the path of the enhanced configuration under 5.5 as well as creating a LAG. I created two native 5.5 enhanced vDSs: one for internal use with 2x 10 GbE (1 A-side / 1 B-side) and the other for external use with 4x 10 GbE (2 A-side / 2 B-side).

Using KB2051826, I tried creating a 4-port LAG, which was easy enough; however, we could not get the vPCs up no matter what load-balancing settings we used. We even resorted to a no-LAG configuration with settings mimicking the parallel configuration, using "Route based on IP hash". Neither the channels nor the vPC would come up. I ended up calling VMware Support along with my network tech, with the initial belief that we might have needed an A-side LAG of 2 and a B-side LAG of 2. The first tech had no understanding of the constructs, and after 2 hours on the phone with a network escalation tech who ran in circles, we abandoned the effort.
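
(For reference, the LAG definition that KB2051826 walks through in the Web Client maps to roughly the following in pyVmomi. This is a heavily hedged sketch only: the LacpGroupConfig/LacpGroupSpec class paths, the UpdateDVSLacpGroupConfig_Task method name, and the load-balancing string are assumptions from the 5.5 API reference, and none of it touches the Nexus side, which is where our channel was actually failing.)

```python
# Hedged sketch: define a 4-uplink LAG on an enhanced-mode ('multipleLag') vDS.
# Class and method names below are assumptions based on the vSphere 5.5 API
# reference -- verify against the SDK before relying on them.
from pyVmomi import vim

def add_four_uplink_lag(dvs, lag_name="lag-data"):
    """dvs is assumed to be a vim.dvs.VmwareDistributedVirtualSwitch in enhanced mode."""
    lag_cfg = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupConfig(
        name=lag_name,
        mode="active",                                    # LACP active on the host side
        uplinkNum=4,                                      # four uplinks in the LAG
        loadbalanceAlgorithm="srcDestIpTcpUdpPortVlan")   # assumed enum string
    lag_spec = vim.dvs.VmwareDistributedVirtualSwitch.LacpGroupSpec(
        lacpGroupConfig=lag_cfg,
        operation="add")                                  # valid values: add/edit/remove
    # Returns a Task object; wait on it with your usual task helper.
    return dvs.UpdateDVSLacpGroupConfig_Task([lag_spec])
```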

In the end, I gutted the native 5.5 vDSs from the hosts and recreated them as 5.1 vDSs. I then upgraded them to 5.5 but did not choose enhanced mode, leaving them in basic. I changed the load-balancing setting of the port groups to "Route based on IP hash" and migrated my physical NICs over to the uplinks. A celebratory reboot and everything came up fine. We're perplexed. Could this be some weird firmware or functionality bug?
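
(For anyone who wants to script the part that did work: setting a port group to "Route based on IP hash" looks roughly like this in pyVmomi. It's a sketch, untested here, under the assumption that the policy class paths below match your SDK version.)

```python
# Sketch: switch an existing dvPortgroup's teaming policy to
# "Route based on IP hash" (loadbalance_ip). The policy class paths are
# assumptions based on the pyVmomi type hierarchy -- verify against your SDK.
from pyVmomi import vim

def set_ip_hash_teaming(dv_pg):
    """dv_pg is a vim.dvs.DistributedVirtualPortgroup object."""
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        policy=vim.StringPolicy(value="loadbalance_ip"))   # Route based on IP hash
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming)
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=dv_pg.config.configVersion,          # required for reconfigure
        defaultPortConfig=port_cfg)
    return dv_pg.ReconfigureDVPortgroup_Task(spec)          # returns a Task
```

Worth noting: plain IP-hash teaming without a LAG is meant to pair with a static port-channel (channel-group mode on) on the switch side rather than LACP, which may be part of why the basic-mode setup behaved differently from the LACP attempt.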

Regards DGN

Nick_Andreev
Expert

The load-balancing algorithm does not play a role in building a LAG or MLAG between the hosts and switches. You can choose whichever algorithm you want and the LAG should still be established.

I didn't fully grasp what the solution was, though. You upgraded from 5.1 to 5.5 and unselected "Enhance the LACP support"?

---
If you found my answers helpful please consider marking them as helpful or correct.
VCIX-DCV, VCIX-NV, VCAP-CMA | vExpert '16, '17, '18
Blog: http://niktips.wordpress.com | Twitter: @nick_andreev_au