Hey guys,
We are moving to a new datacenter, and we have a chance to reconfigure four ESXi hosts with efficiency in mind. Right now we aren't NIC teaming and do not have redundancy. I came into this environment, so I do have the knowledge of how to configure redundancy; however, I want to know exactly what options are available to me.
We have three switches stacked to form one logical unit. Each host has 6 NICs. I'm going to dedicate 4 (two teams of two NICs each) and wire them to the switches redundantly. I would like to configure these active/active if possible, but what does this entail? I must configure an EtherChannel on my Cisco switch, correct? Do I also need Nexus 1000v licenses?
Welcome to the Community. Yes, you will be able to have the NICs in an active/active configuration. The configuration of the physical switch is going to depend on the NIC teaming method you plan to use; for example, you will only need to enable EtherChannel if you plan to use Route based on IP hash.
If you plan to use a distributed switch, you will need at least the Enterprise Plus level of licensing, which includes support for the Nexus 1000v, and the Nexus 1000v itself requires an additional license.
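To make the Route based on IP hash point concrete, here is a simplified Python sketch of how an IP-hash style policy spreads flows across uplinks. This is an illustration only, not VMware's actual implementation, and the addresses are made up:

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Pick an uplink index by XOR-ing the two IPs, modulo the uplink count.
    Simplified illustration of a 'Route based on IP hash' style policy;
    not VMware's exact algorithm."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_uplinks

# A single host talking to many peers spreads its flows across both
# uplinks, which is why IP hash requires a static EtherChannel on the
# physical switch: the switch must treat the uplinks as one link.
for i in range(1, 5):
    dst = f"10.0.1.{i}"
    print("10.0.0.5 ->", dst, "uses vmnic", ip_hash_uplink("10.0.0.5", dst, 2))
```

The takeaway: the uplink is chosen per source/destination IP pair, so any one conversation is still capped at a single NIC's speed.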
Hard to believe you don't currently have NIC redundancy configured with 6 NICs!? To give you advice, please provide some information about your current configuration.
André
@Idelossa,
Six is a great number! Typically, people do this kind of redundancy with just two switches, so since you have three stacked, you can discount the third one and use it elsewhere. A typical config would look like this:
ESX vSwitch config
vmnic0,3 - Management Network (separate VLAN)
vmnic1,4 - vMotion Network (separate VLAN)
vmnic2,5 - Production Network (VLANs and what have you)
You can run EtherChannel or LACP, but that is not really necessary unless you have high traffic on one link. If I had to choose, I would choose LACP, but that's more of a preference. You will want to make sure that "Notify switches" is turned on in the vSwitch port group settings, so that the physical switches are notified of state changes in the links.
Physical switch config
switch 1: vmnic0,1,2
switch 2: vmnic3,4,5
This way, if you lose a switch, a single vmnic, or a vmnic bus (e.g., vmnic0,1 built into the motherboard), you will still be live in every fashion.
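As a quick sanity check of a cabling plan like this, you can simulate losing each physical switch and confirm every port group still has a surviving uplink. A small Python sketch (the vmnic and switch names simply mirror the example layout above):

```python
# Hypothetical model of the layout above: which vmnics back each port
# group, and which physical switch each vmnic is cabled to.
port_groups = {
    "Management": {"vmnic0", "vmnic3"},
    "vMotion":    {"vmnic1", "vmnic4"},
    "Production": {"vmnic2", "vmnic5"},
}
switches = {
    "switch1": {"vmnic0", "vmnic1", "vmnic2"},
    "switch2": {"vmnic3", "vmnic4", "vmnic5"},
}

def survives_switch_loss(port_groups, switches):
    """Return True if every port group keeps at least one live uplink
    no matter which single switch fails."""
    for lost_nics in switches.values():
        for nics in port_groups.values():
            if not (nics - lost_nics):  # all uplinks were on the failed switch
                return False
    return True

print(survives_switch_loss(port_groups, switches))  # True for this layout
```

The same check works for the three-switch variant: just list the third switch and its vmnics.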
If you want to use all three switches then I would do it like this:
ESX vSwitch config
vmnic0,1,3 - Management/vMotion Network (separate VLANs for each)
vmnic2,4,5 - Production Network (separate VLANs for each)
Physical switch config
switch 1: vmnic0,2
switch 2: vmnic1,4
switch 3: vmnic3,5
Hope that helps! You certainly don't need the Nexus 1k to be fully redundant!
Dan,
Thanks this is really helpful. I'm just a little confused though about running etherchannel and nic teaming, can we clarify ->
Nic teamning does not always = link aggregation (on the VMware side of things) correct?
In order to use link aggregation on the vmware side = I will need nexus 1000v licensing correct?
Am I able to use active/active nic teaming w/ nexus licenses, if so do I need link aggregation?
I'm worried mostly about the traffic going to and from our database server, we have noticed some congestion and if I could bump up the throughput on those wires I'd love to.
Idelossa,
Correct. NIC teaming in the vSphere hypervisor does not equal LACP or EtherChannel. The host balances outgoing traffic according to the configured teaming policy: Route based on originating virtual port ID (the default), Route based on source MAC hash, or Route based on IP hash.
Correct, in order to do actual LACP/EtherChannel you will need the Nexus 1000v license plus Enterprise Plus licensing, so it's rather pricey. I can tell you, however, that I have rarely run into situations where that is a technical requirement based on throughput.
More often, there are configuration options to consider first, rather than just throwing money at the problem. Have you considered traffic shaping? If you really want more available bandwidth, you can consolidate your network:
ESX vSwitch config
vmnic0,3 - Management/vMotion Network (separate VLANs for each)
vmnic1,2,4,5 - Production Network (separate VLANs for each)
Physical switch config
switch 1: vmnic0,1,2
switch 2: vmnic3,4,5
Or you could even consolidate this and have a dedicated pair of NICs on that host for your DB server. So you have lots of options. Hope that helps.
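On the traffic-shaping suggestion: vSwitch traffic shaping is configured with an average bandwidth, a peak bandwidth, and a burst size, which together behave roughly like a token bucket. Here is a simplified Python sketch of the idea; it models only the average-rate and burst parameters, with illustrative numbers, and is not VMware's exact shaper:

```python
def shape(events, rate, burst):
    """Token-bucket sketch of traffic shaping.
    events: list of (timestamp_sec, packet_bytes), sorted by time.
    rate:   average bandwidth in bytes/sec (token refill rate).
    burst:  burst size in bytes (bucket depth).
    Returns a list of booleans: was each packet forwarded?"""
    tokens, last, out = burst, 0.0, []
    for t, size in events:
        # Credit accumulates while the link is idle, capped at the burst size.
        tokens = min(burst, tokens + (t - last) * rate)
        last = t
        if size <= tokens:
            tokens -= size
            out.append(True)
        else:
            out.append(False)
    return out

# 100 B/s average with a 200 B burst: the initial burst passes, a
# back-to-back packet exceeds the credit, and after the bucket refills
# traffic flows again.
print(shape([(0.0, 150), (0.5, 150), (2.5, 150)], rate=100, burst=200))
```

The point is that shaping caps the hogs rather than adding bandwidth, which is often enough to relieve congestion on a shared team.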
Awesome Dan,
Thanks a lot for that.
Just to confirm, however: active/active requires link aggregation, correct (and thus also requires Nexus 1000v licenses)? No active/active without that, correct?
And yes, I know there are a ton of options; I'm just trying to handle each option one at a time. Step by step, haha.
One other thing: with the above consolidation method, you still get redundancy on vMotion that way, no?
No, active/active does not require Nexus 1000v. The native teaming policies decide which active NIC to use; there are three types: Route based on originating virtual port ID (the default), Route based on source MAC hash, and Route based on IP hash.
Good stuff!
Thanks a ton!
No, most people do active/active on teamed links. If a NIC is listed under "Active adapters" then it is in an active/active state; traffic will be routed via all active NICs according to the teaming policy.
Yes, what I showed in the last example was a typical teamed concept where management and vMotion share two links, since they are less utilized, each prioritized on one physical NIC.
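The default active/active behavior (Route based on originating virtual port ID) can be sketched in a few lines: each VM's virtual port is pinned to one active uplink, so an individual VM sticks to one NIC while the VMs as a group use all of them. Illustrative only, not VMware's internals:

```python
def port_id_uplink(virtual_port_id: int, num_active_uplinks: int) -> int:
    """Pin each virtual switch port to one active uplink (round-robin by
    port ID). Simplified sketch of a 'Route based on originating virtual
    port ID' style policy; not VMware's exact internals."""
    return virtual_port_id % num_active_uplinks

# Four VMs on a 2-uplink active/active team: each VM stays on one vmnic,
# but both uplinks carry traffic, with no EtherChannel needed.
for port in range(4):
    print("virtual port", port, "-> vmnic", port_id_uplink(port, 2))
```

This is why active/active works fine with plain access/trunk ports on the physical switches: each MAC address only ever appears on one uplink at a time.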
Dan,
since ESXi 5.1, LACP is also supported on the vSphere Distributed Switch (VDS).
We have some older Dell PowerEdge 2950s with only 2 NICs each, using link aggregation to an HP ProCurve 2910 switch.
It works like a charm.
Regards,
Emanuel
EKardinal,
Absolutely. Unfortunately, a lot of users still aren't on 5.1 yet. Frankly, I think it is a feature that should have been incorporated into 4.1, but hey, what do I know?
So once we finish our migration to the new datacenter, we will be upgrading to 5.1. So we are able to do link aggregation on 5.1 without the Nexus licenses?? This is great news.
Correct.
Thanks for all the help guys, I appreciate it.