VMware Cloud Community
polysulfide
Expert

Best Practice for Multiple NICs

In my current pre-production test environment, I have dual-port NICs which are configured as a trunk at the switch level and use a matching load-balance algorithm at the ESX team level. This works great: I can tag VLANs onto my network, get a 2 Gbps connection, and have failover capability.
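For anyone building the same thing, the ESX side of that setup looks something like this from the service console (ESX 3.x syntax; the vSwitch name, vmnic numbers, and VLAN ID below are just placeholders):

    # Create the vSwitch and link both ports of the dual-port NIC as uplinks
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic0 vSwitch1
    esxcfg-vswitch -L vmnic1 vSwitch1

    # Add a VM port group and tag it with a VLAN carried on the switch trunk
    esxcfg-vswitch -A "VM Network" vSwitch1
    esxcfg-vswitch -v 100 -p "VM Network" vSwitch1

As far as I know, the matching load-balance policy ("Route based on ip hash" to pair with a static EtherChannel on the switch) has to be set in the VI Client under the vSwitch's NIC Teaming tab; esxcfg-vswitch doesn't expose it.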

I'm ordering my production boxes with a quad-port adapter in addition to the on-board dual.

My initial thought was that I would trunk all four ports of the quad, use one of the on-board ports for the service console, and the other for HA.

Now I'm wondering if I should set up three trunks: one for VM networks, one for HA, and another for something else.

Since I have redundant core switches, I was also thinking I could make a 2-port trunk on one switch, a 2-port trunk on the other, and team those trunks together. Since I'll never push 4 Gbps, this would give me a redundant 2 Gbps connection with switch redundancy, but I don't think you can configure ESX to support that.
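If the cores can't be stacked into a single logical switch, the fallback I've been sketching is to skip link aggregation entirely and team all four quad ports on one vSwitch with ESX's default "route based on originating virtual port ID" policy, two uplinks to each core switch. Roughly (the vmnic numbers are placeholders):

    # No port-channel on the switch side; default port-ID load balancing on ESX
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2    # uplink to core switch A
    esxcfg-vswitch -L vmnic3 vSwitch2    # uplink to core switch A
    esxcfg-vswitch -L vmnic4 vSwitch2    # uplink to core switch B
    esxcfg-vswitch -L vmnic5 vSwitch2    # uplink to core switch B

With port-ID balancing each VM is pinned to a single uplink, so no one VM sees more than 1 Gbps, but the aggregate traffic spreads across all four links and a switch failure just fails everything over to the surviving pair.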

I'm curious what others have done to make the best use of six gigabit adapters. I'm using FC storage, so I don't need a storage network, and I'm using VCB, so I don't really need a backup network.

Any and all input is welcome. Thanks!

2 Replies
ctfoster
Expert

Networking is not my strong point, but I think this would work as long as the trunk ports are aggregated across the switches. The switches would have to be 'stacked' on the backplane and operate as a single logical unit rather than just uplinked. Using NICs the way you suggest is not uncommon.

khughes
Virtuoso

We run 6 x 1 Gbps NICs in each of our hosts, along with FC connections to our SANs. Four of the six are on our production network (including VMotion, service console, etc.) and the other two go to our DMZ (we really only need one, but the second is there for redundancy). Of the four production connections teamed together, two go to one switch and the other two go to another switch in case of switch failure. This setup has worked pretty well for us. The only thing I would change, looking back and knowing what I know now, is to pull one NIC out of the production team and set it aside for VMotion and the other housekeeping traffic. A rough sketch of the layout is below. Hope that info helps.
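Something like this (the vmnic numbers are illustrative, not our actual assignments):

    # Production vSwitch: four uplinks, split 2 + 2 across the physical switches
    esxcfg-vswitch -a vSwitch0
    esxcfg-vswitch -L vmnic0 vSwitch0    # physical switch A
    esxcfg-vswitch -L vmnic1 vSwitch0    # physical switch A
    esxcfg-vswitch -L vmnic2 vSwitch0    # physical switch B
    esxcfg-vswitch -L vmnic3 vSwitch0    # physical switch B

    # DMZ vSwitch: two uplinks, one per switch, purely for redundancy
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic4 vSwitch1    # physical switch A
    esxcfg-vswitch -L vmnic5 vSwitch1    # physical switch B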

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "