VMware Cloud Community
ldelossa
Contributor

My options for fault tolerance, nic teaming, and link aggregation

Hey Guys,

We are moving to a new datacenter, and we have a chance to reconfigure our 4 ESXi hosts with efficiency in mind. Right now we aren't NIC teaming and do not have redundancy. I came into this environment; I do know how to configure redundancy, but I want to know exactly what options I have available to me.

We have 3 switches stacked to form one logical unit. Each host has 6 NICs. I'm going to dedicate 4 of them (two teams of two NICs each) and wire them to the switches redundantly. I would like to configure these active/active if possible - but what does this entail? I would need to configure an EtherChannel on my Cisco switch, correct? Do I also need Nexus 1000v licenses?

14 Replies
weinstein5
Immortal

Welcome to the Community - yes, you will be able to have the NICs in an active/active configuration. The configuration of the physical switch is going to depend on the NIC teaming method you plan to use - as an example, you will only need to enable EtherChannel if you plan to use Route Based on IP Hash.

If you plan to use a distributed switch you will need at least the Enterprise Plus level of licensing; this covers the Nexus 1000v as well, but the Nexus 1000v does require an additional license of its own.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
a_p_
Leadership

Hard to believe you don't currently have NIC redundancy configured with 6 NICs!? To give you advice, please provide some information about

  • the current network configuration (virtual and physical)
  • the features used which require networking (e.g. vMotion, iSCSI, ...)
  • your current licensing

André

dbthree
Enthusiast

@Idelossa,

Six is a great number! Typically, people do this kind of redundancy with just two switches, so since you have three stacked, you could simply discount the third one and utilize it elsewhere. A typical config would look like this:

ESX vSwitch config

vmnic0,3 - Management Network (separate VLAN)

vmnic1,4 - vMotion Network (separate VLAN)

vmnic2,5 - Production Network (VLANs and what have you)

You can run EtherChannel or LACP, but that is not really necessary unless you have high traffic on one link. If I had to choose, I would choose LACP, but that's mostly preference. You will want to make sure that "Notify Switches" is turned on in the vSwitch port group settings, so that the physical switches are notified of state changes in the links.

Physical switch config

switch 1: vmnic0,1,2

switch 2: vmnic3,4,5

This way, if you lose a switch, a single vmnic, or a vmnic bus (e.g., vmnic0,1 built into the motherboard), you will still be live in every fashion.
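On the ESXi side, that two-uplinks-per-vSwitch layout can be sketched with esxcli roughly as below. The vSwitch name and vmnic numbering here are assumptions for illustration; adjust them to your hosts.

```shell
# Sketch only: assumes vSwitch1 carries the Production network on vmnic2/vmnic5
# (one uplink cabled to each physical switch).
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic5

# Both uplinks active (active/active), with "Notify Switches" enabled so the
# physical switches learn about link-state changes immediately.
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic2,vmnic5 --notify-switches=true
```

No EtherChannel is needed on the physical side for this: the default teaming policy pins each virtual port to one uplink, so the switches never see one MAC on two ports at once.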

If you want to use all three switches then I would do it like this:

ESX vSwitch config

vmnic0,1,3 - Management/vMotion Network (separate VLANs for each)

vmnic2,4,5 - Production Network (separate VLANs for each)

Physical switch config

switch 1: vmnic0,2

switch 2: vmnic1,4

switch 3: vmnic3,5

Hope that helps! You certainly don't need the Nexus 1k to be fully redundant!

Dan C. Barber // VCAP // NCIE // CCNP-DC Data Center Solution Architect Presidio www.presidio.com
ldelossa
Contributor

Dan,

Thanks, this is really helpful. I'm just a little confused about running EtherChannel and NIC teaming, though; can we clarify?

NIC teaming does not always equal link aggregation (on the VMware side of things), correct?

In order to use link aggregation on the VMware side, I will need Nexus 1000v licensing, correct?

Am I able to use active/active NIC teaming without the Nexus licenses, and if so, do I still need link aggregation?

I'm worried mostly about the traffic going to and from our database server; we have noticed some congestion, and if I could bump up the throughput on those wires I'd love to.

dbthree
Enthusiast

ldelossa,

Correct. NIC teaming in the vSphere hypervisor does not equal LACP or EtherChannel. The hypervisor balances outgoing traffic from the host according to the teaming policy: by default based on the originating virtual port ID, though it can also be based on IP hash, source MAC, etc.

Correct, in order to do actual LACP/EtherChannel you will need the Nexus 1000v license, plus Enterprise Plus, so it's rather pricey. I can tell you, however, that I have rarely run into situations where that is a technical requirement based on throughput.

More often, there are configuration options to consider first, rather than just throwing money at the problem. Have you considered traffic shaping? If you really want more available bandwidth, you can consolidate your network:

ESX vSwitch config

vmnic0,3 - Management/vMotion Network (separate VLANs for each)

vmnic1,2,4,5 - Production Network (separate VLANs for each)

Physical switch config

switch 1: vmnic0,1,2

switch 2: vmnic3,4,5

Or you could even consolidate this and have a dedicated pair of NICs on that host for your DB server. So you have lots of options. Hope that helps.
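A dedicated pair for the DB server could be sketched as a port group teaming override like this. The port group name, VLAN ID, and vmnic choices are hypothetical; substitute your own.

```shell
# Sketch only: hypothetical "DB-Network" port group pinned to a dedicated
# NIC pair (vmnic4/vmnic5 assumed); VLAN 20 is a placeholder.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=DB-Network
esxcli network vswitch standard portgroup set --portgroup-name=DB-Network --vlan-id=20

# Override the vSwitch-level teaming so this port group uses only the
# dedicated pair, leaving the other uplinks to the rest of production.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=DB-Network --active-uplinks=vmnic4,vmnic5
```

The override gives the DB traffic its own physical bandwidth without any EtherChannel or extra licensing.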

Dan C. Barber // VCAP // NCIE // CCNP-DC Data Center Solution Architect Presidio www.presidio.com
ldelossa
Contributor

Awesome Dan,

Thanks a lot for that.

Just to confirm, however: active/active requires link aggregation, correct (and thus also the Nexus 1000v licenses)? No active/active without that, correct?

And yes I know there's a ton of options, just trying to handle each option one at a time. Step by step haha.

One other thing: in the consolidation method above, vMotion still gets redundancy that way, no?

weinstein5
Immortal

No, active/active does not require the Nexus 1000v - the native teaming policies will decide which active NIC to use. There are three types:

  1. Route Based on Originating Virtual Port ID - This is the default. Based on the virtual port ID assigned to the virtual NIC, an uplink port is selected for outbound traffic.
  2. Route Based on IP Hash - Each packet is examined, and based on the source and destination IP addresses an uplink is selected to send that packet out. This teaming policy requires the physical switch ports to be configured for LACP/EtherChannel. If the VM is communicating with multiple destinations, this policy does the best job of distributing the traffic across all active uplink ports.
  3. Route Based on Source MAC Hash - Based on the MAC address of the virtual NIC, an uplink port is selected for outbound traffic.
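For reference, on a standard vSwitch the three policies map to esxcli load-balancing values like this (the vSwitch name is assumed):

```shell
# Sketch: set the teaming policy on a standard vSwitch (vSwitch0 assumed).
#   portid = Route Based on Originating Virtual Port ID (the default)
#   iphash = Route Based on IP Hash (requires EtherChannel upstream)
#   mac    = Route Based on Source MAC Hash
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
    --load-balancing=portid
```

Only iphash needs any matching configuration on the physical switch; the other two work against plain access or trunk ports.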
If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
ldelossa
Contributor

Good stuff!

Thanks a ton!

dbthree
Enthusiast

No, most people do active/active on teamed links. If a NIC is listed under "Active adapters," then it is in an active/active state. Traffic will be routed via all active NICs according to the policy.

Yes, what I showed in the last example was a typical teamed setup where management and vMotion share two links, since they are less utilized, with each prioritized on one physical NIC.

Dan C. Barber // VCAP // NCIE // CCNP-DC Data Center Solution Architect Presidio www.presidio.com
EKardinal
Enthusiast

Dan,

Since ESXi 5.1, LACP is also supported by the VDS.

VMware KB: Sample configuration of EtherChannel / Link Aggregation Control Protocol (LACP) with ESXi...

We have some older Dell PowerEdge 2950 with only 2 NICs and using Link Aggregation to a HP ProCurve 2910 Switch.

It works like a charm.

Regards,

Emanuel

dbthree
Enthusiast

EKardinal,

Absolutely. Unfortunately, a lot of users still aren't on 5.1 yet. Frankly, I think it is a feature that should have been incorporated into 4.1, but hey, what do I know?

Dan C. Barber // VCAP // NCIE // CCNP-DC Data Center Solution Architect Presidio www.presidio.com
ldelossa
Contributor

So once we finish our migration to the new datacenter, we will be upgrading to 5.1. So we are able to do link aggregation on 5.1 without the Nexus licenses? This is great news.

dbthree
Enthusiast

Correct.

Dan C. Barber // VCAP // NCIE // CCNP-DC Data Center Solution Architect Presidio www.presidio.com
ldelossa
Contributor

Thanks for all the help guys, I appreciate it.
