VMware Cloud Community
glab75
Enthusiast

virtual distributed switch NIC teaming recommendation

We have a vSphere cluster using NFS datastores.  Each node has 4x 10Gb ports, and we are running the Enterprise Plus license.

For the network config, I am planning to enable Network I/O Control on a single vDS, then create the dvUplink group using all 4x 10Gb adapters, configured as trunk ports on the physical switch.  All VLAN-tagged port groups (management, vMotion, IP storage, VM networks) will be configured with all 4x 10Gb uplinks active and 'Route based on originating virtual port' for load balancing.
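
For reference, this is roughly how I was planning to push that teaming policy with pyVmomi.  It's only a sketch: the vCenter address, credentials, and port group name below are placeholders, and the same spec would get applied to each of the port groups.

# Sketch only: set all four uplinks active with 'Route based on originating
# virtual port' on one distributed port group. Host, credentials and the
# port group name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE          # lab only; validate certs in production

si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local',
                  pwd='***', sslContext=ctx)
content = si.RetrieveContent()

# Look up the distributed port group by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
pg = next(p for p in view.view if p.name == 'IP-Storage')   # placeholder name
view.Destroy()

# Teaming policy: all 4 uplinks active, 'loadbalance_srcid' = originating virtual port
order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
    activeUplinkPort=['Uplink 1', 'Uplink 2', 'Uplink 3', 'Uplink 4'],
    standbyUplinkPort=[])
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    policy=vim.StringPolicy(value='loadbalance_srcid'),
    uplinkPortOrder=order)

spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming))
pg.ReconfigureDVPortgroup_Task(spec=spec)

Disconnect(si)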

The other option would be dedicating 2x 10Gb NICs to NFS storage traffic only, then placing all other port groups on the remaining 2x 10Gb trunk ports.

I think having all 4 active is the way to go, but I'm curious whether anyone has comments or feedback on why this wouldn't be a good option.

Feedback appreciated.  Thanks

3 Replies
NicolasAlauzet

Hi there,

Question: are you using NFS multipathing?  If you have multipathing, you can use 4 active adapters, and for the VM traffic port groups I would suggest 'Route based on physical NIC load' for load balancing.

If not, most of the time you end up with a 2 + 2 adapter split, for example (rough sketch below):

Uplinks 1 and 2 active, uplinks 3 and 4 standby for management, vMotion, and storage traffic.

Uplinks 3 and 4 active, uplinks 1 and 2 standby for virtual machine traffic.
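
If it helps, here is a rough pyVmomi sketch of that 2 + 2 split.  The port group names, uplink labels, and the 'pgs' lookup dict are placeholders, so adjust them to your own inventory.

# Rough sketch only: explicit active/standby uplink order per distributed port
# group. 'pgs' is assumed to be a dict of port group name ->
# vim.dvs.DistributedVirtualPortgroup objects you have already retrieved
# (e.g. via a container view); names and uplink labels are examples.
from pyVmomi import vim

def set_teaming(pg, active, standby, policy='loadbalance_srcid'):
    """Apply an explicit uplink failover order and load-balancing policy."""
    order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=active, standbyUplinkPort=standby)
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        policy=vim.StringPolicy(value=policy), uplinkPortOrder=order)
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            uplinkTeamingPolicy=teaming))
    return pg.ReconfigureDVPortgroup_Task(spec=spec)

infra_uplinks = ['Uplink 1', 'Uplink 2']   # management / vMotion / NFS
vm_uplinks    = ['Uplink 3', 'Uplink 4']   # virtual machine traffic

for name in ('Management', 'vMotion', 'IP-Storage'):
    set_teaming(pgs[name], active=infra_uplinks, standby=vm_uplinks)

for name in ('VM-Network-A', 'VM-Network-B'):
    set_teaming(pgs[name], active=vm_uplinks, standby=infra_uplinks)

# If you go with NFS multipathing and all four uplinks active instead,
# 'loadbalance_loadbased' is the policy value for 'Route based on physical NIC load'.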

But I would suggest analyzing your workloads and adapting the decision to your actual requirements.  If your workloads are heavy IOPS consumers, dedicating adapters to storage may be the best option.  If your VMs generate a lot of network traffic, I think the layout listed above would fit best.

Always a good read: Networking Best Practices

Hope that helps,

Cheers

-------------------------------------------------------------------
Triple VCIX (CMA-NV-DCV) | vExpert | MCSE | CCNA
glab75
Enthusiast

Thanks for your feedback.  The NFS shares are v3, so multipathing won't come into play.  The load is similar for both VMs and storage I/O.  That being said, would you still opt for the 2 active / 2 standby setup you mentioned?

NicolasAlauzet

Yes. I would keep the same approach.

NFS (v3) will only ever use one active adapter, even if you have two active uplinks in the port group.

I would keep that 2 + 2 approach (management, vMotion, etc. plus NFS on two uplinks, and VMs on the other two).

But if you want to be certain that the full 10 Gb is available for NFS, you can also do something like this (sketched below):

Mgmt, vMotion, etc.: Uplink 1 active; Uplinks 2, 3, 4 standby
NFS: Uplink 2 active; Uplinks 1, 3, 4 standby
VMs: Uplinks 3, 4 active; Uplinks 1, 2 standby
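
Using the same pyVmomi mechanism as in my earlier reply, that layout is just a different set of active/standby lists per port group.  A quick sketch (port group names are placeholders, and 'pgs' is assumed to be a name-to-port-group dict you have already built):

# Sketch of the 1 / 1 / 2 uplink layout as data. 'pgs' is assumed to be a dict
# of port group name -> vim.dvs.DistributedVirtualPortgroup objects; names and
# uplink labels are examples only.
from pyVmomi import vim

layout = {
    # port group      (active uplinks,            standby uplinks)
    'Management':     (['Uplink 1'],              ['Uplink 2', 'Uplink 3', 'Uplink 4']),
    'vMotion':        (['Uplink 1'],              ['Uplink 2', 'Uplink 3', 'Uplink 4']),
    'IP-Storage':     (['Uplink 2'],              ['Uplink 1', 'Uplink 3', 'Uplink 4']),
    'VM-Network':     (['Uplink 3', 'Uplink 4'],  ['Uplink 1', 'Uplink 2']),
}

for name, (active, standby) in layout.items():
    pg = pgs[name]
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        policy=vim.StringPolicy(value='loadbalance_srcid'),
        uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
            activeUplinkPort=active, standbyUplinkPort=standby))
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            uplinkTeamingPolicy=teaming))
    pg.ReconfigureDVPortgroup_Task(spec=spec)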

Also, Network I/O Control can help you in the 2 + 2 config.
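
And if you do enable NIOC, here is a rough sketch of raising the shares for NFS and VM system traffic on the vDS itself (NIOC v3 on vSphere 6.x and later; 'dvs' is assumed to be the vim.DistributedVirtualSwitch object you already looked up, the share values are only examples, and I'd double-check the type names against your pyVmomi version):

# Sketch only: adjust Network I/O Control (v3) shares for system traffic types.
# Assumes NIOC is already enabled on the switch; share values are examples.
from pyVmomi import vim

def traffic_shares(key, shares):
    # key is a system traffic class: 'management', 'vmotion', 'nfs', 'virtualMachine', ...
    return vim.DvsHostInfrastructureTrafficResource(
        key=key,
        allocationInfo=vim.DvsHostInfrastructureTrafficResourceAllocation(
            shares=vim.SharesInfo(level='custom', shares=shares)))

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,
    infrastructureTrafficResourceConfig=[
        traffic_shares('nfs', 100),             # NFS datastore traffic
        traffic_shares('virtualMachine', 100),  # VM traffic
        traffic_shares('vmotion', 50),
        traffic_shares('management', 25),
    ])
dvs.ReconfigureDvs_Task(spec=spec)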

There are many possible configurations, and depending on your workloads some fit better than others, but I always like to keep things simple and let the software work to your advantage (Network I/O Control and 'Route based on physical NIC load', for example).

-------------------------------------------------------------------
Triple VCIX (CMA-NV-DCV) | vExpert | MCSE | CCNA