VMware Cloud Community
admin
Immortal

4 x 10GbE NICs - Best Practices

Hi Group,

We are upgrading to hosts with 4 x 10GbE interfaces. I haven't seen it stated specifically, but would it be best to dedicate two NICs to iSCSI and two to vMotion/management/VM networking, or to combine all four, treat them as 40Gb, and rely on switch-based QoS instead? We would be sharing the vMotion/management/VM networking links either way.

I know some of this comes down to preference, but since all traffic will be flowing through the same switch anyway, I figured it might make just as much sense to group the four interfaces together. I don't see any real caveats to this.

Opinions?

4 Replies
jedijeff
Enthusiast

Our hosts have 3 x dual-port 10Gb NICs, for a total of six 10Gb ports. I've divided them as follows:

2 ports (each on a different physical NIC in case a card fails) as 2 DVS uplinks, used for all of the VM VLANs.

2 ports (each on a different physical NIC) for vMotion. vMotion on 10Gb is pretty nice; you can evacuate a host with 100 VMs in minutes.

2 ports (each on a different physical NIC) for iSCSI.

The above all use an active/standby teaming policy.
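
For reference, here's a rough pyVmomi sketch of that kind of active/standby setup (explicit failover order). The host name, credentials, VLAN, and vmnic/port group names are all placeholders, and I'm showing it on a standard vSwitch port group for brevity rather than a dvPortgroup:

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own host and credentials.
si = SmartConnect(host='esxi01.example.com', user='root', pwd='secret',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ns = host.configManager.networkSystem

# Explicit failover order: one uplink active, the other standby.
teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
teaming.policy = 'failover_explicit'
teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
    activeNic=['vmnic4'], standbyNic=['vmnic5'])

# Port group that rides the active NIC and fails over to the standby.
pg = vim.host.PortGroup.Specification()
pg.name = 'vMotion'                 # placeholder port group name
pg.vlanId = 20                      # placeholder VLAN
pg.vswitchName = 'vSwitch1'         # assumes this vSwitch already exists
pg.policy = vim.host.NetworkPolicy(nicTeaming=teaming)
ns.AddPortGroup(portgrp=pg)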

I use 2 copper ports for management traffic.

Since my traffic is pretty well segregated at this point, I don't have to do NIOC or QoS.

dbthree
Enthusiast

scott_k2003,

This is a very common scenario, since a lot of people end up with 2 x dual-port 10GbE NICs. I have personally set up dozens and dozens of such environments. Unless a particular variation is required, I typically architect the network this way.

If you're on 5.1, you can use LACP to build actual aggregated links. On an earlier version, you're limited to the standard outbound load-balancing algorithms.

ESXi Switch Design (vDS is preferable with Network IO Control enabled)

  • vSwitch1: 2 x 10GbE
    • portgroup VLAN for Management (vmnic0 active; vmnic1 standby)
    • portgroup VLAN for vMotion (vmnic1 active; vmnic0 standby)
    • portgroup VLAN for Production (both vmnics active)
  • vSwitch2: 2 x 10GbE (vSS is typically used here, but either is fine)
    • portgroup VLAN for iSCSI, with jumbo frames and full flow control enabled end to end on the vmnics, vmknics, vSwitch, physical switch, and SAN array (see the sketch below)
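
To make the vSwitch2 piece concrete, here's a rough pyVmomi sketch that creates the iSCSI vSwitch with jumbo frames and adds the port group. The host name, credentials, VLAN, and vmnic names are placeholders, and the physical switch and SAN array still have to be configured for jumbo frames and flow control on their side:

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own host and credentials.
si = SmartConnect(host='esxi01.example.com', user='root', pwd='secret',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ns = host.configManager.networkSystem

# vSwitch2 bonded to the two storage uplinks, with jumbo frames enabled.
spec = vim.host.VirtualSwitch.Specification()
spec.numPorts = 128
spec.mtu = 9000                     # jumbo frames on the vSwitch itself
spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic2', 'vmnic3'])
ns.AddVirtualSwitch(vswitchName='vSwitch2', spec=spec)

# iSCSI port group on that vSwitch (VLAN is a placeholder).
pg = vim.host.PortGroup.Specification()
pg.name = 'iSCSI'
pg.vlanId = 30
pg.vswitchName = 'vSwitch2'
pg.policy = vim.host.NetworkPolicy()
ns.AddPortGroup(portgrp=pg)

The iSCSI vmkernel NICs you bind on top of that port group also need their MTU raised to 9000 (via UpdateVirtualNic), otherwise jumbo frames stop at the vSwitch.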

The biggest reason for segregating the traffic is to reduce broadcast traffic across physical NICs and improve overall response time.

There is nothing wrong with aggregating all four together, and like I said, it comes down to what is acceptable in your environment given your requirements. For most users, there is still a general requirement to segregate storage-level traffic on a separate vSwitch.
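
And if you do put everything on one vDS and lean on Network IO Control instead of physical separation, turning NIOC on is a single call against the switch. A sketch, assuming vCenter credentials and a single vDS in the inventory (all names are placeholders):

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# NIOC is a vCenter-level feature, so connect to vCenter, not the host.
si = SmartConnect(host='vcenter01.example.com', user='administrator',
                  pwd='secret', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Grab the first distributed switch in the inventory (placeholder lookup).
dvs = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True).view[0]

# Enable Network IO Control so the built-in network resource pools
# (management, vMotion, iSCSI, VM traffic, ...) get their shares enforced.
dvs.EnableNetworkResourceManagement(enable=True)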

Dan C. Barber // VCAP // NCIE // CCNP-DC | Data Center Solution Architect, Presidio | www.presidio.com
admin
Immortal

Thanks for the helpful replies. I wanted to ensure there were no caveats I was unaware of in aggregating four links together (we are on 5.1) and relying entirely on VLANs.

muddin
Contributor

EtherChannel can be an option for aggregating multiple ports.
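
One caveat if you go that way: a static EtherChannel on the physical switch requires the vSwitch teaming policy to be 'Route based on IP hash' on the ESXi side. A rough pyVmomi sketch of flipping that on an existing standard vSwitch (host and switch names are placeholders):

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own host and credentials.
si = SmartConnect(host='esxi01.example.com', user='root', pwd='secret',
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ns = host.configManager.networkSystem

# Reuse the existing vSwitch spec and switch its teaming policy to IP hash,
# which is what a static EtherChannel on the physical side expects.
vsw = next(v for v in ns.networkInfo.vswitch if v.name == 'vSwitch0')
spec = vsw.spec
if spec.policy is None:
    spec.policy = vim.host.NetworkPolicy()
if spec.policy.nicTeaming is None:
    spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
spec.policy.nicTeaming.policy = 'loadbalance_ip'
ns.UpdateVirtualSwitch(vswitchName='vSwitch0', spec=spec)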
