I have my ESXi hosts connected to Nexus 2000s in a port channel across 2 separate switches.
Does this mean both links will be utilized at any given time? Or will traffic from a VM that initially started on, say, vmnic2 always stay on vmnic2? Or will traffic going to the VM actually be split between vmnic2 and vmnic6?
The short answer: if you want a single VM to use more than one link, then you need a link aggregate (port channel). The switch (or switches) need to be configured with a static link aggregate, i.e. without LACP, and the ESX host needs to be configured to use the "route based on IP hash" load balancing policy.
The slightly longer answer...
As per the IEEE standard, a link aggregate (port-channel) can only exist between two physical devices. While this can be used to add bandwidth, it does of course introduce a single point of failure.
Cisco vPC is a proprietary mechanism that allows a single downstream device to have two connections across two physical switches, but see those two switches as a single entity.
As far as the Nexus switches are concerned the MAC addresses of all the VMs on the host are associated with a single link i.e., the port-channel. The switch then decides, based on the port-channel load balancing algorithm in use, which physical link should be used. If the switches are configured with a load-balancing algorithm that looks at source/destination IP and port numbers (configured with command port-channel load-balance ethernet source-dest-port), a single VM can utilise more than one link of the port-channel. This of course is for traffic flowing from the network to the host.
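To make the switch side concrete, here is a hedged sketch of checking and setting that hashing algorithm on the Nexus. The port-channel number and the IP addresses in the verification command are placeholders, not taken from the thread:

```shell
# Check which load-balancing algorithm the switch currently uses
show port-channel load-balance

# Hash on source/destination IP plus L4 port numbers, so flows from
# a single VM can land on different member links of the port channel
configure terminal
 port-channel load-balance ethernet source-dest-port
end

# Optionally, ask the switch which member link a given flow would hash to
# (port-channel 1 and the addresses below are placeholder values)
show port-channel load-balance forwarding-path interface port-channel 1 src-ip 10.0.0.10 dst-ip 10.0.0.20
```

Note that the hash is computed per flow, so two flows from the same VM to different destinations can take different member links, but a single flow always stays on one link.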
For traffic from the ESX hosts, a single VM will only use more than one physical link if the ESX load balancing is set to route-based on IP hash. This is what enables the link aggregation on the ESX. Take a look at Part 3 of Ken Cline's Great vSwitch debate for details of the load balancing options available on the ESX host.
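For the ESX side, a minimal sketch using the ESXi 5.x esxcli namespace on a standard vSwitch (the vSwitch name is a placeholder; on a distributed switch the same policy is set in the vSphere client under the port group's teaming settings):

```shell
# Show the current teaming/load-balancing policy of the vSwitch
esxcli network vswitch standard policy failover get -v vSwitch0

# Set load balancing to "route based on IP hash", which is what
# allows a single VM's traffic to use more than one uplink when the
# physical switch side is a static port channel
esxcli network vswitch standard policy failover set -v vSwitch0 -l iphash
```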
Regards
So are you using a vPC between two separate Nexus 5000s? You have 2000 A as a FEX off of 5000 A, and 2000 B as a FEX off of 5000 B?
Typically the downstream device would need to be capable of negotiating an LACP EtherChannel, so it would probably need to support LACP active mode. The only virtual switches I'm aware of that will do this are the 1000v and the vDS starting in 5.1. Otherwise the vPC will never come up and you just have two normal access ports. You can use "show vpc" on the 5k to check.
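A quick hedged sketch of the verification commands on each 5k (the port-channel numbers in the output will be whatever you configured):

```shell
# Peer status and per-vPC state; the vPC to the host should show "up"
show vpc

# Check that the vPC configuration matches between the two 5ks
show vpc consistency-parameters global

# Confirm the member ports are actually bundled:
# flags should show P (bundled), not I (individual) or s (suspended)
show port-channel summary
```

If the host side is not negotiating LACP, the member ports will typically show as individual/suspended here rather than bundled.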
Traffic flow from the host to the switch will depend on the load distribution algorithm on the virtual host; the 1000v has about 16 options. Traffic coming back down from a 5k over the vPC (each 5k has only one local member) will always take the local member of the vPC; it won't send traffic across the peer link unless the local member link is down.
Both 2000s have vPCs to both 5000s.
Cisco does not say anything about requiring the Cisco 1000v to utilize both active port channels on the downstream host. I am on a 5.0 dvSwitch; I have not upgraded to 5.1 yet.
Dual-homed FEX (enhanced vPC) is less common, so there's not much experience with it out there. Are you running at least NX-OS 5.1 on a 5500 platform? That's when support for dual-layer eVPC was added.
With single-homed FEX modules you require LACP, which means the 1000v or vSphere 5.1. If you have eVPC on the 5500 you can support static port channels. Once the static port channel is up, everything will fall to the load distribution algorithm on each end.
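For reference, a minimal sketch of what a static (mode on, no LACP) port channel on the FEX host ports might look like. The port-channel number and FEX interface IDs below are placeholders, not taken from your setup, and with dual-homed FEX the same host-interface config is applied on both 5500s:

```shell
# Static port channel on the FEX host interfaces facing the ESX host
configure terminal
 interface Ethernet101/1/1, Ethernet102/1/1
  channel-group 10 mode on   ; "mode on" = static, no LACP negotiation
 interface port-channel 10
  switchport mode access
end
```

Because "mode on" skips negotiation entirely, a mismatch with the host side (e.g. the ESX host not set to IP hash) won't be detected by the switch, so double-check both ends.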
Yes, I am running this:
system: version 5.1(3)N2(1b)
So I do not need LACP if I am using eVPC?
How can I tell if the traffic is load balanced across both NICs on the switch?
thanks
I have LACP enabled on the management dvSwitch but I cannot get the port channel to come up. Whenever I migrate my management vmk from the standard switch to the dvs, it freezes up and I lose connection. Any ideas?