VMware Cloud Community
devokris
Contributor

NIC teaming - redundant switches

Hello,

Is it possible on an ESX host to aggregate 2 links coming from 2 separate (Cisco) switches?

The goal is to fully utilize the capacity of both links and to survive a NIC or switch failure.

Is the Cisco Nexus virtual switch needed? What does its dynamic LACP functionality bring?

Thanks.

5 Replies
AntonVZhbankov
Immortal

Yes, you just add the NICs to the vSwitch and set the teaming policy to active/active.
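
As a concrete sketch, from the classic ESX service console (the vSwitch and NIC names `vSwitch0`/`vmnic1` are assumed for illustration, not taken from the thread; the active/active failover order itself is set in the vSphere Client):

```shell
# List the current vSwitch configuration and its uplinks
esxcfg-vswitch -l

# Link a second physical NIC (cabled to the other Cisco switch) to vSwitch0,
# giving the vSwitch two uplinks that can be used active/active
esxcfg-vswitch -L vmnic1 vSwitch0
```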


---

VMware vExpert '2009

http://blog.vadmin.ru

EMCCAe, HPE ASE, MCITP: SA+VA, VCP 3/4/5, VMware vExpert XO (14 stars)
VMUG Russia Leader
http://t.me/beerpanda
jbruelasdgo
Virtuoso

Yes, you can.

Consider taking a look at this VMware PDF:

http://www.vmware.com/files/pdf/virtual_networking_concepts.pdf

Regards,

Jose B Ruelas

http://aservir.wordpress.com/

patrickds
Expert

Without the Nexus 1000V you cannot have true link aggregation; each connection will be limited to a single cable's speed, even with NIC teaming.

With it you can, and whether or not you can do it across physical switches depends on the physical switches.

They'll have to be stacked for starters, and then support cross-stack link aggregation.

You'll probably have to go way up in the price range to do this, but it is possible (or at least it should be, according to the Nexus documentation).

devokris
Contributor

So, from what I understand:

vSphere: Without the virtual Nexus switch, I can do NIC teaming (i.e. 802.3ad), but the total bandwidth cannot exceed a single link's bandwidth.

Do you all agree? Where does this limitation come from?

Thanks.

patrickds
Expert

vSphere: Without the virtual Nexus switch, I can do NIC teaming (i.e. 802.3ad), but the total bandwidth cannot exceed a single link's bandwidth.

The total bandwidth between the ESX host and the physical switch(es) can exceed a single link's bandwidth, but the bandwidth of any single connection (such as the Service Console, the VMkernel, or a virtual machine) cannot.

Traffic from different vSwitch ports will be load-balanced across the physical uplinks, but one vSwitch port will always be pinned to one physical port.
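
A minimal sketch of that per-port pinning (the default "route based on originating virtual port ID" behaviour; the uplink names and the modulo mapping are illustrative, not VMware's actual implementation):

```python
# Each vSwitch port is deterministically pinned to one uplink, so a single
# VM's traffic never exceeds one physical link, while different ports
# spread across the uplinks in aggregate.

uplinks = ["vmnic0", "vmnic1"]  # two physical NICs, one per Cisco switch

def uplink_for_port(port_id: int) -> str:
    # Same virtual port always maps to the same uplink
    return uplinks[port_id % len(uplinks)]

# Four VM ports end up alternating between the two uplinks
assignment = {port: uplink_for_port(port) for port in range(4)}
print(assignment)  # {0: 'vmnic0', 1: 'vmnic1', 2: 'vmnic0', 3: 'vmnic1'}
```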

Do you all agree? Where does this limitation come from?

From the fact that the default virtual switch does not support LACP, I guess.
