VMware Cloud Community
GlennL10
Contributor

10Gb NICs with vSwitch

We have a Blade chassis with two 10Gb NICs (vmnic0 and vmnic1), using Standard vSwitches, and Enterprise licensing (no distributed vSwitch), running vSphere 5u1.

What would you recommend in order to set this up considering we need only these 3 networks (they are in separate VLANs):

vSphere Management

vMotion

VM Network

Would you have a single vSwitch with 3 port groups, or 2 or 3 separate vSwitches, one per network?


What NIC settings would you recommend for failover, teaming etc.?

Thanks

4 Replies
MKguy
Virtuoso

What kind of blades do you have?

You can't create 3 vSwitches because a physical NIC can only be part of one vSwitch, and creating 2 vSwitches means losing redundancy for the same reason.

In your case, there doesn't seem to be much of a choice. A single vSwitch with both uplinks and the 3 port groups is pretty much the only way to go.

If you had Ent+ with dvSwitches, you could make your life easier by utilizing Network IO Control to handle and limit the different traffic types efficiently. But since you don't, it's best to set up an active/standby failover teaming config where management and vMotion go primarily through NIC1, and VM port groups through NIC2.

Given that you have 10Gb interfaces, this should be more than sufficient even with only one (primary) interface for each function.
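To illustrate, a rough sketch of that setup from the ESXi shell (esxcli) could look like the lines below. The vSwitch name, port group names and VLAN IDs (10/20/30) are only placeholders for your own values, and note that on a default install vSwitch0 already exists with a Management Network port group, so you would adapt the existing switch rather than create a new one:

# Create the vSwitch and attach both 10Gb uplinks (skip "add" if vSwitch0 already exists)
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1

# Create the three port groups and tag them with their VLANs (IDs are examples)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Management
esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="VM Network"
esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=30

# Management and vMotion: vmnic0 active, vmnic1 standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name=Management --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# VM traffic: vmnic1 active, vmnic0 standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name="VM Network" --active-uplinks=vmnic1 --standby-uplinks=vmnic0

This way each traffic type gets the full 10Gb of its primary uplink during normal operation, while still failing over to the other NIC if its primary link goes down.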

-- http://alpacapowered.wordpress.com
GlennL10
Contributor

Thanks for the reply.

We have an IBM BladeCenter. It does have two 1Gb NICs as well, but we are not planning to use those, as the two 10Gb NICs should be more than enough for our environment.

Yes, that's true; I forgot that the pNICs can only be assigned to one vSwitch at a time. That makes it simple then, one vSwitch it is.

We do have a Microsoft NLB cluster on two virtual machines in multicast mode, so what do you recommend for settings such as notify switches, promiscuous mode, forged transmits etc. considering all of the above?

MKguy
Virtuoso

We do have a Microsoft NLB cluster on two virtual machines in multicast mode, so what do you recommend for settings such as notify switches, promiscuous mode, forged transmits etc. considering all of the above?

I recommend just going with the working and supported defaults for those settings (everything enabled except promiscuous mode, i.e. Promiscuous Mode: Reject, MAC Address Changes: Accept, Forged Transmits: Accept, Notify Switches: Yes). Multicast NLB, unlike the unicast version, does not require any special adjustments on the ESX(i) side. We run it like this without issues too.
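If you want to verify or set these explicitly from the ESXi shell, a quick sketch (again assuming your vSwitch is called vSwitch0) would be:

# Show the current security policy of the vSwitch
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0

# Explicitly set the defaults: promiscuous mode off, MAC changes and forged transmits allowed
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=false --allow-mac-change=true --allow-forged-transmits=true

These values simply mirror the defaults described above, so running the set command is only needed if someone has changed them before.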

See Microsoft Network Load Balancing Multicast and Unicast operation modes and Sample Configuration - Network Load Balancing (NLB) Multicast Mode Configuration.

-- http://alpacapowered.wordpress.com
GlennL10
Contributor

Thanks for your replies, that all sounds good. :)
