VMware Cloud Community
Jujo
Enthusiast

NICs per vSwitch - Multiple NICs

Hey Guys,

I've searched the forum and didn't find exactly what I'm after. We're preparing to upgrade our ESX environment to HP c-Class blades and ESX 3.5. We will run 8 NICs on our hosts, and I've been trying to figure out the optimal config for them. We're going to use 2 pNICs for the service console and VMotion, and probably 2 for iSCSI (just for fun). So now I'm wondering what to do with the 4 NICs left for VM traffic.
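For context, here's roughly how I'm picturing the base layout from the service console (esxcfg syntax typed from memory, so treat it as a sketch; the vmnic numbering is just an example):

# SC + VMotion: two uplinks on the default vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# iSCSI: its own vSwitch with two uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

That leaves vmnic4-7 for VM traffic, and the two options I'm weighing are: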

1 vSwitch with all 4 NICs:

with 3 active NICs being load balanced, and 1 standby NIC

OR

2 vSwitches:

with an active/standby NIC config on each vSwitch. Both vSwitches would be on the same VLANs; they would just have different names. (Rough CLI sketches of both options below.)
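For reference, here's what each option looks like from the console (again from memory; the active/standby ordering itself is set in the VI Client under the vSwitch/port group NIC Teaming tab, not with these commands):

Option 1 - one vSwitch, all four uplinks:
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -L vmnic6 vSwitch2
esxcfg-vswitch -L vmnic7 vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2

Option 2 - two vSwitches, two uplinks each:
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "VM Network 1" vSwitch2
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic6 vSwitch3
esxcfg-vswitch -L vmnic7 vSwitch3
esxcfg-vswitch -A "VM Network 2" vSwitch3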

I've read here that some have said they've run into problems when putting more than 2 pNICs on a vSwitch. Anyone care to elaborate or expand on that? Please share your thoughts and experience.

I've also toyed with having redundant Service Consoles...but that's another story.

mike_laspina
Champion

Hi,

I am running 4 pNICs on a vSwitch without any issues at all, so I don't think that post is relevant.

I would base the 2p/2s vs. 4p/1s decision on load requirements. If you have heavy traffic, then go 4p/1s; otherwise set up 2p/2s and leave one switch free for future use.

Regards,

Mike

vExpert 2009 - http://blog.laspina.ca/
Rockapot
Expert

I agree with Mike.

If your network load is minimal, then consider using the extra 2 pNICs for a DMZ or backup vSwitch. By backup vSwitch I am referring to assigning the pNICs to a separate physical switch in order to route VM-level backups (via a secondary vNIC in each VM) off the main physical network infrastructure.

Carl

If you found this or any other answer useful, please consider allocating points for helpful or correct answers.

JoJoGabor
Expert

I've never seen a case where you need more than 2 active pNICs in a vSwitch. I would isolate VMotion traffic for security reasons. Bear in mind that providing 8 NIC connections in a blade environment gets expensive, as each chassis will require 8 network switches. You could reduce this to 4 or 6; how about:

2 NICs: VM traffic and SC

2 NICs: iSCSI

2 NICs: VMotion

Or, to reduce to 4 NICs, combine VMotion with the iSCSI network; then each chassis only requires 4 switches. Considering how rarely a VMotion actually happens, it won't affect your storage traffic that much, and usually your iSCSI traffic is isolated from the normal network anyway. (Rough sketch of the 6-NIC layout below.)
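A rough console sketch of the 6-NIC version (esxcfg syntax from memory, vmnic numbers arbitrary):

# VM traffic + SC share the default vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "VM Network" vSwitch0

# iSCSI
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# VMotion
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "VMotion" vSwitch2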

Rockapot
Expert

JoJo,

To be frank, I would not configure 2 NICs to share VM traffic and SC even if they are using separate paths (i.e. VM traffic over vmnic1 as active and vmnic2 as failover, and SC over vmnic2 as active and vmnic1 as failover). Yes, it works; however, I would rather run SC/VMotion via that route than SC and VM traffic.

As for placing VMotion traffic on your iSCSI network: that's just taking an unnecessary chance. Whilst VMotion does not occur all the time, it does depend on how you have DRS configured. If it's set to aggressive and you have a large number of VMs with different workloads, VMotions will still occur regularly. VMotion is also large-burst traffic, so it's not worth the risk when there are other, more suitable virtual network topologies.

Carl

JoJoGabor
Expert

I generally share SC with VM traffic for these reasons:

SC traffic is minimal unless you're using network-based backup products such as VCB in NBD mode or suchlike. The SC requires very little bandwidth, and backups usually run out of hours.

I usually separate the SC out if customers have a specific, secured management VLAN or network, but most don't. Most customers simply want an IP address to connect their vCenter to.

Yeah, maybe the VMotion/iSCSI idea is pushing it!

mike_laspina
Champion

Sharing iSCSI is not something I would do.

Would you share your SCSI disk cables with network traffic if it was possible? Hmmm.

vExpert 2009 - http://blog.laspina.ca/
Rockapot
Expert

I agree: keep iSCSI and VMotion separate, and keep the VM network separate from SC traffic.

Yes, there are circumstances where clients have limited pNIC/pSwitch availability; however, iSCSI and VMotion on the same network is a no-no in terms of best practices, and I hate deploying non-best-practice environments!

Carl

JoJoGabor
Expert

What are your reasons for keeping SC and VM network traffic separate? Just interested to hear other opinions; the only reason I can think of is security.

Jujo
Enthusiast

Guys, the iSCSI is just for experimentation. It is not a requirement; we use a fiber-connected SAN.

I've been reviewing the network usage on our current ESX hosts. It's very low. So low, in fact, that I'm starting to think we'd probably be fine with just 2 pNICs assigned for VM traffic, even with 20 VMs running on them in an active/standby config. That would probably work but just seems risky... I don't want to go crazy on NICs just because they're available. We're looking at roughly a 20:1 consolidation ratio.

Rockapot
Expert

1.> Best Practices

2.> Security (VMotion traffic is not secured and as such can be sniffed on a network)

3.> Reduce the risk of VMotion traffic interfering with SC traffic - VMotion traffic typically comes in high bursts

And on point 3, I have seen a global corporation have major issues as a result of VMotion traffic on their SC network (they are fixed now).

Carl

Rockapot
Expert

Jujo,

Remember that VM-to-VM traffic on the same ESX host (same vSwitch and port group) will not traverse the pNICs, so pNIC usage stays low unless traffic is going back out to the physical switch. (Quick way to verify this below.)

This is another factor in VM workload placement during the design phase which can help reduce the load on the physical interfaces.
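If you want to see this for yourself, the service console gives you a rough view (from memory on 3.x, so double-check on your build):

esxtop    # press 'n' to switch to the network view
# VM-to-VM traffic on the same vSwitch/port group shows up on the
# VM ports but leaves the vmnic uplink counters flat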

I rarely see a client having bandwidth issues when they are using 2 pNICs, although as NICs are fairly cheap it is not uncommon to see 4 pNICs allocated to a vSwitch for VM network traffic. I wouldn't worry about it in a small environment with 20-50 VMs of average usage.

Carl

Rockapot
Expert

Anyway, my point's been made, and it's within best practices. Plus I'm in the UK and it's 22:51 on a Friday night, so I'm off to la-la land... ;)

Carl

Jujo
Enthusiast

Never mind the iSCSI stuff; I'm much more concerned about VM traffic at this point.

I guess I could run 2 pNICs per vSwitch in an active/standby config and get away with it, but I worry about backups slowing things down overnight, because there'd only be 2 live NICs. It's probably best/safest to run 4 pNICs for the VM traffic, whether on 1 vSwitch or 2. It seems I would get the most benefit from having at least 2 pNICs active and load balanced at all times for VM traffic.

Having 4 pNICs on a vSwitch with 2 active and 2 standby seems like I'd be wasting a NIC. Why have 2 standby NICs? Why not make 3 pNICs active and keep 1 standby?

The other option is to use 2 vSwitches, both with 1 active/1 standby NIC. That looks like it might be unnecessarily complicating matters, though. I could achieve the same throughput, load balancing, and failover with 1 vSwitch with all 4 pNICs attached.
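If I understand the teaming right (assuming the 3.5 default "route based on originating virtual port ID" policy, where each vNIC gets pinned to a single active uplink), the arithmetic works out like this: 20 VMs over 3 active pNICs is roughly 6-7 VMs per uplink, with the standby carrying nothing until a failover, while 20 VMs over 2 active pNICs is about 10 per uplink. At our traffic levels, neither should come close to saturating a gigabit link.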

Thoughts on that? Especially the 3 active/1 standby config.

kjb007
Immortal

Not sure why you would run your pNICs in active/standby for VM traffic; they should all be active. The active/standby technique is suited to VMotion/service console type traffic, not to virtual machines.

I would also not put more than 2 pNICs into a vSwitch. Troubleshooting gets substantially more difficult when you add 4+ NICs to a team. If you have multiple VLANs, you can spread those VLANs over multiple vSwitches, with 2 pNIC uplinks from each vSwitch (sketch below). You can still do that with a single VLAN, but you would have to split the VMs across the vSwitches manually, which can get tedious and difficult to maintain. In that scenario, I'd add all 4 NICs to one vSwitch. That said, I run at least a 20:1 ratio and have never had a problem with 2 pNICs. As you noticed, even 1 pNIC is typically enough to handle that load, and 2, both active and providing redundancy, is even better.
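A sketch of the VLAN-per-vSwitch idea (esxcfg syntax from memory; the VLAN IDs are made up for illustration):

esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "VLAN100" vSwitch2
esxcfg-vswitch -v 100 -p "VLAN100" vSwitch2

esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic6 vSwitch3
esxcfg-vswitch -L vmnic7 vSwitch3
esxcfg-vswitch -A "VLAN200" vSwitch3
esxcfg-vswitch -v 200 -p "VLAN200" vSwitch3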

-KjB

vExpert/VCP/VCAP | vmwise.com / @vmwise
Jujo
Enthusiast

After further review, it seems I can run all NICs active on the new c-Class setup. We couldn't do that with our p-Class setup, so this is a bit new to me. No need for any standby NICs when we can run multiple active NICs.

Why does troubleshooting get more difficult with 4 pNICs on a single vSwitch? If you lose a pNIC or a pair of pNICs, why is it tougher to figure out? Please explain. Thanks!

kjb007
Immortal

The problems I've run into personally are when there is trouble with the paths themselves, as opposed to outright link failure. If you have all 4 NICs active in a team, then you don't really know which one a VM is using at any given point. Granted, that is also true with 2 active paths, but then it's only a 50/50 choice between uplink switches. With 4, you have 2 paths per pSwitch, and trying to figure out which link is bad when none of them are actually down gets difficult. If one VM is showing networking trouble but you have 4 uplinks to deal with, multiple per switch, you have to work out whether it's a switch problem, a VM problem, or a path problem. It gets at least slightly easier when there's only one interface per switch: then it's either one path or the other, one switch or the other. Even with just 2 interfaces it can be difficult to pinpoint problems, especially in a complicated network with access/distribution/core switches, routers, and so on. Adding multiple paths to each makes that troubleshooting harder. (A couple of console tricks below.)
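Two things that help when you suspect a path rather than a link (from memory, so verify on your build): esxtop's network view shows per-port traffic alongside each vmnic uplink, and you can temporarily shrink the team to isolate a suspect uplink:

esxtop    # press 'n' for the network view
# pull a suspect uplink out of the team while you test, then relink it
esxcfg-vswitch -U vmnic5 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2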

-KjB

vExpert/VCP/VCAP | vmwise.com / @vmwise