VMware Cloud Community
jhunter11
Contributor

Network Redundancy Question

Hi Guys,

Just a quick question/clarification.

All of the ESX hosts in our environment have 8 physical NICs.

The way we have our networking set up, the IP subnet that the Service Console runs on (10.9.8.X) is also the IP subnet of our internal network.

The way I have my hosts set up is with two of the physical NICs on that network going into the same virtual switch.  For the Service Console, the NICs are set up as Active/Standby.  For the internal network they are set up as Active/Active.

Is this the correct/preferred way to set this up, to achieve not only redundancy for the Service Console but also NIC teaming for greater bandwidth on our inside network?  Or should I be trying to use some of the other empty NIC ports to completely remove the Service Console from that IP subnet?
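
(For reference, this is roughly how the current teaming/failover settings on those two port groups could be checked with a quick pyVmomi sketch; the vCenter/host names, credentials, and port group names below are just placeholders from my setup.)

    import ssl
    from pyVim.connect import SmartConnect

    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='***',
                      sslContext=ssl._create_unverified_context())
    host = si.RetrieveContent().searchIndex.FindByDnsName(
        dnsName='esx01.example.com', vmSearch=False)
    ns = host.configManager.networkSystem

    # Print the effective teaming policy and NIC failover order per port group.
    for name in ('Service Console', 'VM Network'):
        pg = next(p for p in ns.networkInfo.portgroup if p.spec.name == name)
        team = pg.computedPolicy.nicTeaming
        order = team.nicOrder
        print(name, team.policy,
              'active:', list(order.activeNic) if order else [],
              'standby:', list(order.standbyNic) if order else [])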

James

15 Replies
LarryBlanco2
Expert

Having 8 ports is good.

A good way to set up an ESX server is as follows:

3 ports for the VM Network and the Service Console.  Put these on the same vSwitch and configure VLAN trunking.

2 ports for vMotion (preferably on a separate VLAN or network).

3 ports for IP storage (iSCSI/NFS).

I would use teaming on each vSwitch (static LACP); see the sketch below.

I don't know what type of storage you have going on.  The above is a good and redundant setup.
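
If you ever want to script that first vSwitch (VM Network + Service Console with VLAN trunking), a rough pyVmomi sketch would look something like this - the vCenter/host names, NIC names, and VLAN IDs are just placeholders, not anything from your environment:

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='***',
                      sslContext=ssl._create_unverified_context())
    host = si.RetrieveContent().searchIndex.FindByDnsName(
        dnsName='esx01.example.com', vmSearch=False)
    ns = host.configManager.networkSystem

    # One vSwitch with three uplinks for VM Network + Service Console traffic.
    ns.AddVirtualSwitch(
        vswitchName='vSwitch1',
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=128,
            bridge=vim.host.VirtualSwitch.BondBridge(
                nicDevice=['vmnic0', 'vmnic1', 'vmnic2'])))

    # VLAN-tagged port groups on that vSwitch (trunk the same VLANs on the physical ports).
    for pg_name, vlan in (('Service Console', 10), ('VM Network', 20)):
        ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
            name=pg_name, vlanId=vlan, vswitchName='vSwitch1',
            policy=vim.host.NetworkPolicy()))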

Larry B.

MauroBonder
VMware Employee

This guide may be helpful.

Please don't forget to award points for "helpful" and/or "correct" answers. Thank you!
jhunter11
Contributor

Okay, let me provide my setup a little differently then.

Each host has 8 NICs.

Two NICs are for SC/vMotion/Inside Network

Two NICs are for DMZ

1 NIC for Backups

3 NICs are, as of yet, undefined.

These hosts are all connected to a NetApp SAN.

Should I look into separating Service Console and vMotion?  We had originally thought of putting vMotion on the backup network.

When you say you would use static LACP on each vSwitch, what do you mean?

When you say to use ports for IP storage (iSCSI/NFS), what do you mean?  I don't think I've ever configured a network or NIC for anything storage related.  Perhaps I just don't understand what you mean.

Thanks for the help, Larry. :)

James

LarryBlanco2
Expert

Attached is a look at the networking layout. You should consider using VLANs instead of dedicating the vSwitches and their NICs to only one subnet. VLANs will allow you to get better usage/performance across your NICs. With that said, in order to take advantage of the teamed NICs on each vSwitch, you should set up LACP; ESX will load balance outgoing requests. In order to load balance incoming requests, you need to configure your switches with LACP. Different switch manufacturers call it by different names: Cisco = EtherChannel, Dell/HP = LACP.

ESX by itself supports static LACP (static EtherChannel). This is how I have mine configured.
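
In API terms, static EtherChannel on the ESX side just means setting the vSwitch teaming policy to "route based on IP hash"; here is a minimal pyVmomi sketch (the vCenter/host and vSwitch names are placeholders), with the physical switch ports configured as a static port-channel to match:

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='***',
                      sslContext=ssl._create_unverified_context())
    host = si.RetrieveContent().searchIndex.FindByDnsName(
        dnsName='esx01.example.com', vmSearch=False)
    ns = host.configManager.networkSystem

    # Reuse the live vSwitch spec so the existing uplinks stay linked,
    # then switch the load balancing to IP hash.
    vsw = next(v for v in ns.networkInfo.vswitch if v.name == 'vSwitch1')
    spec = vsw.spec
    spec.policy = spec.policy or vim.host.NetworkPolicy()
    spec.policy.nicTeaming = spec.policy.nicTeaming or vim.host.NetworkPolicy.NicTeamingPolicy()
    spec.policy.nicTeaming.policy = 'loadbalance_ip'  # 'route based on IP hash'
    ns.UpdateVirtualSwitch(vswitchName='vSwitch1', spec=spec)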

This is an excellent blog post, which helped me out a great deal as well: http://blog.scottlowe.org/2006/12/04/esx-server-nic-teaming-and-vlan-trunking/ Also, this one has all the good links: http://blog.scottlowe.org/2011/01/07/looking-back-looking-forward/

I don't know whether your NetApp is doing FC, iSCSI, or NFS. If you are using iSCSI or NFS, then use the NICs for IP storage; otherwise use the NICs for anything else.

vMotion should go on its own links. It should also be within its own subnet. This subnet does not need to be routed, so you can just create one, provided that all your switches are interconnected in some fashion.
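
As a rough pyVmomi sketch of that (the port group name, VLAN ID, vSwitch name, and the non-routed subnet below are all made-up placeholders):

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='***',
                      sslContext=ssl._create_unverified_context())
    host = si.RetrieveContent().searchIndex.FindByDnsName(
        dnsName='esx01.example.com', vmSearch=False)
    ns = host.configManager.networkSystem

    # vMotion port group on its own VLAN, on the vSwitch that owns the vMotion uplinks.
    ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name='VMotion', vlanId=30, vswitchName='vSwitch2',
        policy=vim.host.NetworkPolicy()))

    # VMkernel port with an address in a non-routed subnet (no gateway needed here).
    vmk = ns.AddVirtualNic(
        portgroup='VMotion',
        nic=vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False,
                                 ipAddress='192.168.50.11',
                                 subnetMask='255.255.255.0')))
    print('created', vmk)  # e.g. 'vmk1'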

Larry

Josh26
Virtuoso

Cisco calls it EtherChannel; HP calls it a trunk. If you see the word "LACP" in your ProCurve configuration, you have configured something that cannot talk to VMware.

I strongly urge people to stop using the phrase LACP with VMware - it inevitably creates expectations that are traditionally associated with LACP, not with the limitations of EtherChannel.

If you do want to use LACP, purchase the Nexus 1000V, which enables this support in VMware.

Edit: Both Cisco and HP use the command "lacp" when you actually do want LACP.

Nick_F
Enthusiast

There's more than one correct way to do your config; personally, though, I would use:

2 NICs for SC/vMotion (one NIC active and one standby for the SC, and vice-versa for vMotion); have these on a different subnet from your main network and use ACLs etc. (IMO you need to protect your SC and not have it open on the main network).

2-4 NICs for your VM networks (any one subnet should be configured on at least two of the NICs).

2-4 NICs for your storage (if using iSCSI).

Any NICs left over I would leave unconfigured, in case you want to set up VMware Fault Tolerance in the future, as you'd want to dedicate a NIC or two to that.

bulletprooffool
Champion

This depends entirely on your environment.

If you are using iSCSI/NFS storage, then you'll want to dedicate some NICs to carrying disk data back and forth (storage).

If you have applications with high network throughput but low disk reads/writes, dedicate more NICs to VM networks.

If you are not expecting to overload hosts and it is unlikely that you'll vMotion much, there is no point dedicating 3 NICs to this.

Do not be afraid to leave NICs unused for future expansion, as you see where all your load is.

Ensure that all NICs are 'teamed' (so each vSwitch has at least 2 physical NICs).

Lastly, consider ESXi - no Service Console required, and it is the way forward.

One day I will virtualise myself . . .
jhunter11
Contributor

Yeah, I'm currently working on a few different things to get our networking as it should be.  I'm trying to switch from a VSS to a VDS and then have my network guy do all the NIC teaming on the physical switch side for inbound traffic, so that inbound load balancing is done by the physical switch and outbound load balancing is done by ESXi (I read a blog post that this was the way to go, but if I've been misinformed, please let me know).

When you say that there is no Service Console required on ESXi, what do you mean?  We are currently using ESXi and I have SCs in the environment, is this bad?  Or wrong?  Please advise.  Thanks.

LarryBlanco2
Expert

You're right - wrong choice of term.

Static link aggregation is the term, and it is what you get with the VSS and the VDS.  With the Nexus 1000V you can use LACP, as it is a feature and fully supported.

LarryBlanco2
Expert

Well, you need a management network.  You might have called this "Service Console", but in reality it's a management network for which you have a VMkernel port assigned in a vSwitch.

The VMkernel port handles traffic for host mgmt. (like the Service Console on traditional ESX), vMotion, iSCSI, and NFS.

You will likely have 2+ vSS or vDS port groups with a VMkernel port assigned to them.

One for your primary host mgmt. traffic, another for vMotion traffic, another maybe for iSCSI/NFS traffic, and one more as a backup host mgmt. port.

Each VMkernel port will have its own IP address.  Now, these can be in different VLANs or subnets.  Only one gateway is allowed on an ESX(i) box; this will likely be the gateway assigned to your primary host mgmt. VMkernel port.
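
A quick way to see that per-port IP layout and the single default gateway from a script (again just a pyVmomi sketch; the vCenter/host names are placeholders):

    import ssl
    from pyVim.connect import SmartConnect

    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='***',
                      sslContext=ssl._create_unverified_context())
    host = si.RetrieveContent().searchIndex.FindByDnsName(
        dnsName='esx01.example.com', vmSearch=False)
    info = host.configManager.networkSystem.networkInfo

    # Every VMkernel port has its own IP, but the host has one default gateway.
    for vnic in info.vnic:
        print(vnic.device, vnic.portgroup, vnic.spec.ip.ipAddress)
    print('default gateway:', info.ipRouteConfig.defaultGateway)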

Hope that helps.

Larry B.

Josh26
Virtuoso

jhunter1 wrote:

When you say that there is no Service Console required on ESXi, what do you mean?  We are currently using ESXi and I have SCs in the environment, is this bad?  Or wrong?  Please advise.  Thanks.

Hi,

I'm afraid this is incorrect.

The concept of a Service Console does not exist under ESXi.

LarryBlanco2
Expert

Yes, the Service Console does not exist in ESXi.  There is no Service Console.  On classic ESX, it's a Linux-based VM that resides on the ESX server; you give it an IP address and a gateway and use it to manage the ESX server.  You also have the VMkernel in ESX, which is used for vMotion, iSCSI, and NFS.  It also requires an IP address and a gateway.

In ESXi you get BusyBox, a very small set of utilities mainly used in embedded devices.  This did away with the Service Console.  Since the Service Console held the management interface, they combined it into the VMkernel.  Therefore, in ESXi the VMkernel now handles host management, iSCSI, NFS, and vMotion.
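
For example (a pyVmomi sketch against an ESXi host; the host name and vmk device names are placeholders), tagging which VMkernel ports carry management and vMotion traffic looks roughly like this:

    import ssl
    from pyVim.connect import SmartConnect

    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='***',
                      sslContext=ssl._create_unverified_context())
    host = si.RetrieveContent().searchIndex.FindByDnsName(
        dnsName='esxi01.example.com', vmSearch=False)
    vnm = host.configManager.virtualNicManager

    # On ESXi the VMkernel ports carry the traffic types themselves.
    vnm.SelectVnicForNicType(nicType='management', device='vmk0')  # host management
    vnm.SelectVnicForNicType(nicType='vmotion', device='vmk1')     # vMotion
    # iSCSI/NFS simply use whichever VMkernel port sits on the storage subnet.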

You may have a vSwitch used as the management network named 'Service Console', which is fine.

If you post a picture of your network configuration, you can be better assisted.

Larry B.

jhunter11
Contributor

So, after a long discussion with my network admin, I am thinking about introducing VLANs into our VMware environment.  He knows how he wants to split the trunks for all of the VM networks, but the question he has for me is this: based on all I've read about VMware, we should split the Service Console (or management network) and the vMotion network from each other.  He wants to know: is it enough to have them on different VLANs, or should they be using separate physical wires?

Nick_F
Enthusiast

There's no right or wrong answer to this; we always just use VLANs, with SC and vMotion both trunked over 2 physical uplinks. If you dedicate a physical uplink to each, then you'll need 4 uplinks just to cover SC and vMotion in order to have redundancy. My opinion is that's unnecessary; even if you have plenty of physical uplinks available, I'd still rather use them for VM networks or reserve them for future requirements (such as Fault Tolerance).

LarryBlanco2
Expert

Yes, I agree with Nick_F.  We have the exact same config: 2 trunked NICs for the mgmt. network and vMotion.  You can have them on separate VLANs if you wish; I do not.  I have them on the same VLAN.

If you do not want to trunk them, then you can also just alternate the standby.  For instance, on the same vSwitch, take one NIC and make it active (in the failover order) for the mgmt. network, and put the other on standby.  On the vMotion port group, make the standby NIC from the mgmt. network active and put the other on standby.  This will basically give you failover on the two port groups, and one covers the other.
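
Scripted, that alternating failover order would look something like this rough pyVmomi sketch (the vCenter/host names, port group names, and vmnic names are placeholders for whatever you actually use):

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host='vcenter.example.com', user='admin', pwd='***',
                      sslContext=ssl._create_unverified_context())
    host = si.RetrieveContent().searchIndex.FindByDnsName(
        dnsName='esx01.example.com', vmSearch=False)
    ns = host.configManager.networkSystem

    def set_failover(pg_name, active, standby):
        """Give a port group an explicit NIC failover order."""
        pg = next(p for p in ns.networkInfo.portgroup if p.spec.name == pg_name)
        spec = pg.spec
        spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy(
            policy='failover_explicit',
            nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                activeNic=active, standbyNic=standby))
        ns.UpdatePortGroup(pgName=pg_name, portgrp=spec)

    # Management active on vmnic0, vMotion active on vmnic1 - each covers the other.
    set_failover('Management Network', ['vmnic0'], ['vmnic1'])
    set_failover('VMotion', ['vmnic1'], ['vmnic0'])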

Larry B.