VMware Cloud Community
Dave_McD
Contributor

Best Practice for network redundancy in vSphere 4

I am running a production cluster with three 3850 M2s, currently hosting around 70 VMs, with more to come in the future.

I have 6 NICs in each host to use for networking.

I am unsure of the best way to allocate the NICs.

Do I need redundancy for the VMkernel?

Do I team NICs for the Service Console?

I currently have the following:

vSwitch0 (Service Console): vmnic0

vSwitch1 (VM Network): vmnic3, 4, 5

vSwitch2 (VMotion): vmnic1, 2 (separate subnet)

Do I need three NICs for the VMs, or should I use one of them to team the Service Console?

Would it be better to put a second Service Console on the VM Network vSwitch, or on the VMotion vSwitch?

My preference is for the VMotion vSwitch, but I am willing to hear recommendations from those more learned than me!

8 Replies
mittim12
Immortal

I typically run 6 NICs in our servers and slice them up into three vSwitches. The first vSwitch runs Service Console/VMotion with two NICs assigned: one NIC active for SC with the other on standby, and vice versa for VMotion. The second vSwitch contains the VM Network with two NICs, and the last vSwitch contains the storage network, which also has two NICs.
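If it helps, that layout can be scripted from the ESX service console, roughly like this. The vmnic assignments, port group names, and the VMotion IP are placeholders, and as far as I know the active/standby failover order still has to be set per port group in the vSphere Client, since the esxcfg tools don't expose it:

```shell
# Rough sketch only -- vmnic numbers and the VMotion IP are assumptions.
# vSwitch0 usually already exists with the Service Console port group.
esxcfg-vswitch -L vmnic0 vSwitch0              # SC uplink
esxcfg-vswitch -L vmnic1 vSwitch0              # VMotion uplink
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMotion"

esxcfg-vswitch -a vSwitch1                     # VM Network
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1

esxcfg-vswitch -a vSwitch2                     # storage network
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "Storage" vSwitch2

esxcfg-vswitch -l                              # verify the result
```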

If you found this or any other post helpful please consider the use of the Helpful/Correct buttons to award points

woodwarp
Enthusiast

I agree with mittim12, but it depends on whether you have iSCSI or NFS. If you don't and only have FC storage, you can use the other pNICs either for a separate vSwitch for FT or as additional uplinks for the VM port group vSwitch.

Dave_McD
Contributor

We use FC for our storage, so we don't need to worry about that.

You are both recommending pretty much what I was thinking about, so thanks for your answers!

I am going to go with the following:

vSwitch0

Service Console: vmnic0 active, vmnic1 standby

VMotion: vmnic1 active, vmnic0 standby

vSwitch1

VM Network: vmnic2, 3, 4, 5 active
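For what it's worth, that two-vSwitch layout sketched from the service console (vmnic numbers follow your list above; the SC/VMotion active/standby order is set per port group in the vSphere Client NIC Teaming tab, not through esxcfg):

```shell
# Sketch only -- assumes the default vSwitch0 already carries the Service Console.
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -L vmnic4 vSwitch1
esxcfg-vswitch -L vmnic5 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
```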

Dave_McD
Contributor

Both replies answered my question, but I could only mark one correct, so I marked both as helpful!

jcwuerfl
Hot Shot

I currently have 8 NICs:

2 -> SC (1 active, 1 standby, on separate physical switches)

3 -> VMotion, FT, Client iSCSI PG (EtherChannel), MTU=9000

3 -> VM PG VLANs (EtherChannel)

Previous to this I had 4, which I split as:

2 -> SC, VMotion (2 active), VST (vSwitch VLAN Tagging)

2 -> VM VLANs (2 active), VST (vSwitch VLAN Tagging)

I saw some issues with STP, so I separated the SC back out.

For the 6 you have, that sounds like a pretty good setup:

2) SC, VMotion (active for each, standby for each)

4) VM Networks

Remember that marking NICs Active only affects outgoing traffic, not incoming, unless you have all of those ports in an EtherChannel config; there is an example in the KB for that. It really depends on how much traffic the VM networks carry: if they don't generate much, you could get by with fewer ports there.

For the SC you definitely want some kind of standby uplink, because with ESX the HA agent checks for network isolation through the SC ports, so if that uplink is down it could cause isolation issues. It doesn't sound like you will be doing any FT, so that's something else to think about with the setup.

Dave_McD
Contributor

We are not doing FT at this stage due to the limited storage we currently have. Regarding the uplinks, do you think I should make one of the VM Network ports a standby?

Josh26
Virtuoso

Some of the configurations I see in places really are ridiculous overkill.

Particularly when ESXi is the way of the future (so you should use it), having two pNICs for a "service console" (which equates to a VMkernel port) and two more pNICs for VMotion (another VMkernel port) just doesn't make sense.

Tugboat20111014
Contributor

The suggestions here are good, but I take it a step further. What you want is redundancy, so there are two additional things to think about. First, make sure your physical NICs are connected to different physical switches. Second, look at the physical architecture of your hardware: I would split the NICs between internal and slotted ports. Or, if your hardware has 4 internal NICs, they usually sit on two controller chips, and I would split the vmnics between the controller chips. While this may be a bit of overkill, it provides the greatest redundancy.
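To check which controller each port sits on, you can list the NICs from the service console; vmnics with different PCI bus addresses are on different chips:

```shell
# Lists each vmnic with its PCI address, driver, link state, and speed.
esxcfg-nics -l
```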
