VMware Cloud Community
vmproteau
Enthusiast

pNIC-vSwitch best practice

We're moving locations and will be building fresh at the destination.

Current environment:

Our clusters are made up of HP Proliant DL380 G7 servers. We're happy with them and the size and resource capacities hit the sweet spot for our environment. There are 4 on-board NICs and 4 available PCI-Express slots populated like this:

Slot-1     4-port 1GbE NIC

Slot-2     4-port 1GbE NIC

Slot-3     1-port FC HBA

Slot-4     1-port FC HBA

That gives me a pair of interfaces for each of 4 separate vSwitches. The remaining 4 unused NICs are available for whatever.

2-Service Console

2-VMKernel\VMotion

2-Fault Tolerance

2-Virtual Machines

Proposed environment: We're thinking of designing the new location essentially the same but are likely going 10GbE. If we stick with the HP DL380 we'll need to make a vSwitch decision because of pNIC limitations.

Slot-1 10GbE NIC

Slot-2 10GbE NIC

Slot-3 1-port FC HBA

Slot-4 1-port FC HBA

Now with only 6 pNIC interfaces, a pair will need to support multiple roles. I was thinking I would put VMKernel\VMotion and Virtual Machine traffic on the 10GbE pair. I like keeping Management completely separate, and I think Fault Tolerance should be separate as well.
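For reference, here's a rough sketch of that split in classic ESX esxcfg commands. The vmnic numbering, vSwitch numbers, and port group names are my assumptions, not anything from a real build; check your actual numbering with esxcfg-nics -l.

```shell
# Assumed numbering: vmnic0-3 = the four onboard 1GbE ports,
# vmnic4/vmnic5 = the two 10GbE ports.

# Management (Service Console) on its own onboard pair
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0

# Fault Tolerance on its own onboard pair
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "FT" vSwitch1

# vMotion and VM traffic share the 10GbE pair
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "VMotion" vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2
```

The VMkernel ports for vMotion and FT still need IPs assigned afterwards (esxcfg-vmknic -a with your own addresses); this only lays out the vSwitch/uplink structure.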

Any other ideas, thoughts, or designs that have an advantage over the others? I am looking at servers with additional slots, but I'd like opinions on this specific scenario.

9 Replies
AndreTheGiant
Immortal

To be independent of the number of pNIC pairs, I suggest moving to VLANs.

That way you can build as many networks as you need and still isolate FT, vMotion, and Management.
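On classic ESX the VLAN approach is just tagged port groups sharing the same uplinks; a minimal sketch, assuming the physical switch ports are trunked and using made-up VLAN IDs and names:

```shell
# All port groups share vSwitch0's uplinks; the VLAN tag keeps them isolated.
# VLAN IDs 10/20/30 and the port group names are assumptions -- use your own.
esxcfg-vswitch -A "Management" vSwitch0
esxcfg-vswitch -v 10 -p "Management" vSwitch0

esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -v 20 -p "VMotion" vSwitch0

esxcfg-vswitch -A "FT" vSwitch0
esxcfg-vswitch -v 30 -p "FT" vSwitch0
```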

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
vmproteau
Enthusiast

We use VLANs and everything is isolated in that respect, but there are advantages to segregating at the physical NIC layer as well. In particular, it's recommended that FT have its own physical NIC pair. I know I can share any of these; I was just looking for arguments for or against each of the possibilities.

AndreTheGiant
Immortal

With only two NICs you cannot dedicate a NIC to a single service like FT.

If you have QoS at the switch layer, you can allocate bandwidth or priority.

Or, simpler but not optimal, you can send some port groups to one NIC and the rest to the other (of course, in both cases one NIC is active and the second is standby).

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
vmproteau
Enthusiast

I won't be building any hosts with only 2 NICs. Even the smallest server-class systems have 4 onboard. Also, we design for limited or no single points of failure.

Assume the system supports separating 3 of these (SC, VMotion, VM traffic, and FT) onto dedicated NIC pairs. My question is simply: if I am forced to share a pair of interfaces between 2 of them, is there a best or most logical choice?

AndreTheGiant
Immortal

Yes, you can share vMotion with FT (if you do not plan to use FT very much), or share Management with vMotion.

But IMHO, in a case like this (if you plan to use FT for several VMs), I would prefer to put Management on the same NICs as the VMs.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
Mouhamad
Expert

Hello,

I'd suggest a different scenario. It might not be easier, but since you are making the change and bringing the hosts down anyway, you could also rebuild them with ESXi instead of ESX, since VMware will not release new versions of ESX anymore.

ESXi, as you might know, doesn't require a service console. So you can do the following:

Slot-1 10GbE NIC (VMs)

Slot-2 10GbE NIC (VMs)

2 built-in (VMKernel)

2 built-in (FT)

Regards,

VCP-DCV, VCP-DT, VCAP-DCD, VSP, VTSP
vmproteau
Enthusiast

Yes, I am currently using ESXi for the majority of my environment. So in your scenario, are you putting the Management traffic and vMotion over the VMkernel interfaces?

Mouhamad
Expert

Since you have 10GbE ports, why don't you enable vMotion on them, alongside the VM port group?

The question is: are you really using FT?

VCP-DCV, VCP-DT, VCAP-DCD, VSP, VTSP
vmproteau
Enthusiast

I'm not sure what you mean by enabling vMotion on the VM port group. Do you mean adding a VMkernel port group to the vSwitch handling VM traffic?

I'm open to all configurations. My initial design does combine vMotion and VMs on the 10GbE interfaces; however, my preference is to keep vMotion on separate interfaces for troubleshooting purposes.

All I'm doing here is looking for alternative design opinions while still in the planning phase.

Regarding Fault Tolerance, use is limited due to the current version's FT limitations (specifically, no SMP support), but I definitely need to design for increased use.
