TheVMinator
Expert

Best design for using 8 physical NICs on an ESXi 5.1 host

I have 8 physical NICs to work with on an ESXi 5.1 host using Enterprise Plus licensing. I need to service the following traffic:

Management traffic

vMotion traffic

Virtual machine traffic (probably 2 pNICs will suffice for this)

NFS traffic

Fault tolerance will NOT be used. How many pNICs should I dedicate to NFS, vMotion and management traffic? What failover policy should I use for each (active/active or active/standby)?

This is Enterprise Plus licensing, and vSphere Distributed Switches are being used.

Thanks

logiboy123
Expert

I'm presuming that you are using 1Gb uplinks.

If I were looking to make the most use of all the uplinks then I would do something like the following:

vSS0 - Standard Virtual Switch - 3 uplinks

Management - vmk0/vmk3/vmk7 active/standby/standby

vMotion1 - vmk0/vmk3/vmk7 - unused/active/standby

vMotion2 - vmk0/vmk3/vmk7 - unused/standby/active

vDS1 - Distributed Virtual Switch - 3 uplinks - NIOC enabled - Route based on physical NIC load

VM Networking

vDS2 - Distributed Virtual Switch - 2 uplinks - NIOC/SIOC enabled - Route based on IP HASH - LACP enabled

NFS

There are a lot of configurations you could use. This isn't a simple design, but it will give you maximum throughput for each of your traffic types. I'm presuming that you have a VLAN available for each traffic type: management, vMotion, VM Networks and NFS. I'm further presuming that you are not routing NFS traffic.
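
To make the teaming concrete, here is a rough Python sketch (purely illustrative, no vSphere API involved; I'm treating the three vSS0 uplinks as vmnic0/vmnic3/vmnic7, which are stand-in names) that models the failover order above and checks that each port group has an active uplink, a standby to fail over to, and no uplink listed twice:

```python
# Rough model of the vSS0 failover order above (illustrative only, no vSphere API).
# The three uplinks are assumed to be vmnic0/vmnic3/vmnic7; substitute your own names.

TEAMING = {
    "Management": {"active": ["vmnic0"], "standby": ["vmnic3", "vmnic7"], "unused": []},
    "vMotion1":   {"active": ["vmnic3"], "standby": ["vmnic7"], "unused": ["vmnic0"]},
    "vMotion2":   {"active": ["vmnic7"], "standby": ["vmnic3"], "unused": ["vmnic0"]},
}

def validate(teaming):
    """Basic sanity checks on each port group's failover order."""
    for pg, policy in teaming.items():
        uplinks = policy["active"] + policy["standby"] + policy["unused"]
        assert policy["active"], f"{pg}: needs at least one active uplink"
        assert policy["standby"], f"{pg}: no standby uplink, so a single pNIC failure takes it down"
        assert len(uplinks) == len(set(uplinks)), f"{pg}: an uplink is listed twice"

validate(TEAMING)
for pg, policy in TEAMING.items():
    print(f"{pg}: active={policy['active']} standby={policy['standby']} unused={policy['unused']}")
```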

I don't like putting management on a vDS that vCenter manages or runs from; in fact, I don't like putting the management network on a vDS at all. In a 10GbE environment, where you usually have only 2 uplinks, there isn't a choice in the matter, but in a 1Gb environment you do have a choice. Host profiles and vDS do not always talk nicely to each other, so if you lose the management network, applying the rest of the host profile configuration will fail and you will need to manually re-add the host.

Cheers,

Paul

TomHowarth
Leadership

With 8 NICs I would do something like this.

Standard virtual switch - 2 NICs

Mgmt - active/standby

vMotion - standby/active

Your fun starts with the splitting of the remaining 6 NICs,

depending on production traffic (assuming low to average):

Distributed virtual switch

2 x NICs for production traffic

4 x NICs for NFS storage.

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
mrlesmithjr
Enthusiast

Have a look at the link below for some ideas. With 8 NICs I would do something like this. This scenario assumes that all required VLANs are allowed on all eight uplinks; if not, you will need to create more than one vDS (vSphere Distributed Switch). Also, do not worry about placing management on the vDS, as it is very low risk these days. This also assumes that your uplinks are 1Gb each. NFS does not allow for MPIO, and even though you can do LACP on your switch and within the vDS port group for your NFS traffic, it can overcomplicate things. That said, depending on the number of workloads and their potential IO requirements, going down the LACP route for your NFS traffic may be justified.

dvSwitch-NIOC enabled

dvPortGroup-MGMT (dvUplink1, dvUplink2, dvUplink3, dvUplink4, dvUplink5, dvUplink6) - All uplinks active and Route based on Physical NIC Load

dvPortGroup-vMotion-1 Active - (dvUplink2) -- Standby (dvUplink1, dvUplink3, dvUplink4, dvUplink5, dvUplink6)

dvPortGroup-vMotion-2 Active - (dvUplink3) -- Standby (dvUplink1, dvUplink2, dvUplink4, dvUplink5, dvUplink6)

dvPortGroup-Guests  Active - (dvUplink1, dvUplink2, dvUplink3, dvUplink4, dvUplink5, dvUplink6) - All uplinks active and Route based on Physical NIC Load

dvPortGroup-NFS (option 1, no LACP) - Active (dvUplink7) -- Standby (dvUplink8)

dvPortGroup-NFS (option 2, LACP/IP hash) - Active (dvUplink5, dvUplink6, dvUplink7, dvUplink8) - Route based on IP hash (move dvUplink5 and dvUplink6 from the other port groups to unused)

Obviously there are many other ways to lay all of this out so have fun for sure.
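
To show what "carving out" bandwidth with NIOC looks like in numbers, here is a small Python sketch of the shares arithmetic. The share values are hypothetical (not the vSphere defaults); under contention, each traffic type actively sending on a saturated uplink gets roughly link speed x its shares / total active shares:

```python
# Hypothetical NIOC share values (not the vSphere defaults) on 1Gb uplinks.
# Under contention, NIOC divides a saturated uplink's bandwidth among the traffic
# types actively sending on it, in proportion to their shares.

LINK_GBPS = 1.0

SHARES = {
    "management": 20,
    "vmotion": 50,
    "virtual_machine": 100,
    "nfs": 100,
}

def nioc_split(active_types, shares=SHARES, link_gbps=LINK_GBPS):
    """Worst-case bandwidth each active traffic type is guaranteed on one saturated uplink."""
    total = sum(shares[t] for t in active_types)
    return {t: round(link_gbps * shares[t] / total, 3) for t in active_types}

# Example: management, vMotion and VM traffic all contending on the same 1Gb uplink.
print(nioc_split(["management", "vmotion", "virtual_machine"]))
# -> {'management': 0.118, 'vmotion': 0.294, 'virtual_machine': 0.588}
```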

http://everythingshouldbevirtual.com/vsphere-5-1-network-designs

everythingshouldbevirtual.com @mrlesmithjr
logiboy123
Expert

If the port groups for Management, vMotion and VM Networking never use uplinks 7 and 8, then you might as well create a separate vDS dedicated to NFS. That would be cleaner, easier to understand and simpler to configure.

I like this design actually, or the variation I'll show below, but it still means that the management port group is on the vDS. I really don't like this at all. People say it is safe, but if vCenter is on one of the hosts that it is managing, then a cold restart of your environment is going to be quite a pain. Furthermore, host profiles and vDS are both trying to control the networking (unless you exclude some portions of the networking config from the host profile), and this almost always makes life harder. It's something you probably have to experience in a large-scale environment to truly appreciate how annoying it is. In environments with fewer than 20 hosts it probably isn't much of an issue, because you can take the extra time required to get it right without blowing out the total time required to implement the entire solution.

Also, when you put port groups on a vDS you have to recognize that those port groups are now available for VMs to be attached to. In every environment I've ever worked in where mgmt, vMotion and FT were on a vDS with VM Networking, some idiot has connected several VMs to port groups they shouldn't be using. Quite a security issue, that; not sure why people just don't think.

The following design is a derivative of the mrlesmithjr design. It requires a separate VLAN for each traffic type. For each subsequent VM Networking port group the configuration remains the same; only the VLAN changes.

vDS - NIOC - Route based on physical NIC load - 4 NICs Total

dvpg-Management - dvuplink1-4 all active

dvpg-vMotion1 - dvuplink2 active, dvuplink1,3,4 standby

dvpg-vMotion2 - dvuplink3 active, dvuplink1,2,4 standby

dvpg-VMNetwork1 - dvuplink1-4 all active

dvpg-VMNetwork2 - dvuplink1-4 all active

vDS - NIOC & SIOC - Route based on IP hash - LACP enabled - 4 NICs total (see the sketch below for how IP hash picks an uplink)

dvpg-NFS - dvuplink1-4 all active
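
To illustrate why IP hash helps NFS here, below is a simplified Python model of route-based-on-IP-hash uplink selection. It is an approximation of the idea only; the exact hash used by ESXi and the physical switch may differ, and the IP addresses are made up. Each source/destination IP pair sticks to one uplink, so a single NFS session tops out at 1Gb, but mounting datastores against multiple filer IPs can spread the load across the LACP bundle:

```python
# Simplified model of "Route based on IP hash" uplink selection (approximation only;
# the exact hash used by ESXi and the physical switch may differ). IPs are made up.

import ipaddress

UPLINKS = ["dvUplink1", "dvUplink2", "dvUplink3", "dvUplink4"]

def ip_hash_uplink(src_ip, dst_ip, uplinks=UPLINKS):
    """Pick an uplink from a hash of the source and destination IP addresses."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return uplinks[(src ^ dst) % len(uplinks)]

# One NFS vmkernel talking to three filer IPs: each pair lands on a (possibly different) uplink.
host_vmk = "10.0.50.21"
for filer_ip in ("10.0.50.11", "10.0.50.12", "10.0.50.13"):
    print(filer_ip, "->", ip_hash_uplink(host_vmk, filer_ip))
```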

If your environment is big enough you should consider having a management cluster where high-level servers like vCenter, SSO and SQL live. A management cluster is still the recommended best practice from VMware. For this cluster you would stick to using only standard virtual switches for management to keep the configuration as simple as possible. This will help in the event of a cold start and/or major maintenance tasks. If you have a management cluster, then the risks associated with having management port groups on a vDS are drastically reduced.

Cheers,

Paul

TheVMinator
Expert

OK, thanks for the input. In this case there will be a management cluster. Host profiles are being used, but networking is disabled in them so that host profiles don't attempt to standardize vDS application and configuration; we've already run into the issues you are referring to.

Question to Paul - in this setup:

vSS0 - Standard Virtual Switch - 3 uplinks

Management - vmk0/vmk3/vmk7 active/standby/standby

vMotion1 - vmk0/vmk3/vmk7 - unused/active/standby

vMotion2 - vmk0/vmk3/vmk7 - unused/standby/active

I'm not quite clear on where things map in this design. To make sure I understand, I redid it in terms of the vmkernel ports and physical NICs. Is this the same as what you were thinking?

vmk0 - Management port group - active on pnic0 / standby on pnic1 / standby on pnic2

vmk1 - vMotion1 port group - unused on pnic0 / active on pnic1 / standby on pnic2

vmk2 - vMotion2 port group - unused on pnic0 / standby on pnic1 / active on pnic2

Also, are you assuming that vMotion will use two separate subnets?

Thanks for the great input.

logiboy123
Expert

Yes. That is the layout I was thinking of.

The reason we use two vMotion vmkernels is that this allows 2Gb of vMotion traffic throughput in total. If we only had one vMotion vmkernel then we would only be able to achieve 1Gb maximum throughput for vMotion events.

You can use a single VLAN and subnet for the vMotion vmkernels. But each host will have two vMotion IP addresses.
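
Here is a tiny Python sketch of that layout (the vmkernel names, uplink names and IPs are just placeholders), checking that the two vMotion vmkernels are active on different uplinks, which is what gets you the aggregate 2Gb:

```python
# Two vMotion vmkernel ports on the same subnet, each pinned active to a different
# uplink (names and IPs are placeholders). Aggregate throughput scales with the
# number of vmkernel ports, assuming 1Gb per uplink.

HOST_VMOTION = {
    "vmk1": {"ip": "10.0.60.21", "active": "vmnic3", "standby": "vmnic7"},
    "vmk2": {"ip": "10.0.60.22", "active": "vmnic7", "standby": "vmnic3"},
}

def aggregate_vmotion_gbps(layout, uplink_gbps=1.0):
    """Confirm each vmkernel is active on its own uplink and return the aggregate bandwidth."""
    active = [cfg["active"] for cfg in layout.values()]
    assert len(active) == len(set(active)), "each vMotion vmkernel should be active on a different uplink"
    return uplink_gbps * len(layout)

print(aggregate_vmotion_gbps(HOST_VMOTION), "Gb/s of vMotion throughput in total")
```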

Cheers,

Paul

TheVMinator
Expert

If I want to add backup traffic into the mix of Paul's design, what would be the best way to do that? My backup traffic, such as the nightly virtual machine backup traffic, is going to be on its own subnet. Should I include that on the switch that is going to handle management and vMotion traffic, or elsewhere?

mrlesmithjr
Enthusiast
(Accepted solution)

Yes, I would place the backup traffic on the same switch where management and vMotion are located, whether using a vSS or a vDS. The other option would be to scale down your vDS for NFS traffic from 4 pNICs to 2 pNICs and then either add those two to the existing vDS that contains management, vMotion and VM traffic to add additional bandwidth there, or create a new vDS with those 2 pNICs. But the point of my original thought around creating one vDS is to throw all of my bandwidth together and then carve it out however I want, without having to swap pNICs around after the fact. There are so many different ways to accomplish this, which is the fun part.

everythingshouldbevirtual.com @mrlesmithjr
TheVMinator
Expert


I'm thinking that, in order to separate backup traffic from production traffic, I would do the following:

vDS1 - pnic0 and 1 - management vmk and backup traffic

vDS2 - pnic2 and 3 - vMotion traffic and associated vmks

vDS3 - pnic4 and 5 - production traffic

vDS4 - pnic6 and 7 - NFS traffic and its vmk

This would let me use two vmks for vMotion and get 2Gb of bandwidth. Management would also have 2 NICs to use. Backup traffic would have 2 NICs and redundancy. NFS traffic can probably make do with 2 NICs and has redundancy. Production traffic would have redundancy. Any issues here?
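
For my own sanity, here is a quick Python check of that split (assuming 1Gb uplinks; the pNIC names match the layout above), confirming every pNIC is used exactly once and each traffic type gets two uplinks' worth of bandwidth plus redundancy:

```python
# Quick check of the four-vDS split above (1Gb uplinks assumed).

LAYOUT = {
    "vDS1 - management + backup": ["pnic0", "pnic1"],
    "vDS2 - vMotion":             ["pnic2", "pnic3"],
    "vDS3 - production VMs":      ["pnic4", "pnic5"],
    "vDS4 - NFS":                 ["pnic6", "pnic7"],
}

UPLINK_GBPS = 1.0
TOTAL_PNICS = 8

used = [p for pnics in LAYOUT.values() for p in pnics]
assert len(used) == len(set(used)) == TOTAL_PNICS, "every pNIC should be used exactly once"

for vds, pnics in LAYOUT.items():
    bandwidth = len(pnics) * UPLINK_GBPS
    print(f"{vds}: {bandwidth:.0f}Gb aggregate, redundant={len(pnics) >= 2}")
```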
