In reviewing our 3-host ESXi system, I'm finding that many of the virtual machine port groups have varying Teaming/Failover settings configured. Some have both adapters active/active, while others are configured active/standby. Some policies are set at the vSwitch level, and some are overridden at the port group level. The physical connections to the switches are not in a LAG, and they are 10 Gb copper links. Any suggestions or thoughts on cleaning up the mess? I would like to have a consistent network policy at this level.
There's no rule of thumb for this, as it depends on what the uplinks are used for (Management, VMs, vMotion, iSCSI, NAS, ...).
For distributed vSwitches it's fairly easy, because they include QoS via Network I/O Control, but standard vSwitches do not.
If you share the usage, I will be glad to share my thoughts.
Thanks for the reply. They are standard vSwitches being used to carry virtual machine traffic. The switch uplinks are split between a pair of 10 Gb copper interfaces configured as access ports.
If it's only for Management and VM traffic (i.e. no vMotion or storage traffic), I don't really see a reason from the ESXi side why all the vmnics shouldn't be set to Active on the vSwitch, with all settings inherited by the port groups.
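Before changing anything, it can help to inventory which port groups actually override the vSwitch policy and which just inherit it. Here's a minimal sketch of that audit logic; the data structures and the `audit_teaming` helper are hypothetical, and it assumes you've already exported each port group's teaming settings into plain dicts by whatever means you prefer (PowerCLI, pyVmomi, etc.).

```python
# Desired baseline for this scenario: both uplinks active, none standby.
# (Hypothetical uplink names -- substitute your own.)
DESIRED = {"active": ["vmnic0", "vmnic1"], "standby": []}

def audit_teaming(vswitch_policy, port_groups):
    """Return a list of (name, reason) tuples for items needing cleanup.

    vswitch_policy: dict with 'active' and 'standby' uplink lists.
    port_groups: dict mapping port group name -> teaming dict, where a
    value of None means the port group inherits the vSwitch policy.
    """
    findings = []
    if vswitch_policy != DESIRED:
        findings.append(("<vSwitch>", f"vSwitch policy {vswitch_policy} differs from desired {DESIRED}"))
    for name, policy in port_groups.items():
        if policy is None:
            continue  # inherits from the vSwitch: nothing to clean up
        if policy != DESIRED:
            findings.append((name, f"override {policy} differs from desired {DESIRED}"))
        else:
            # Matches the vSwitch anyway, so the override is just noise.
            findings.append((name, "redundant override (matches vSwitch); remove it"))
    return findings
```

With a baseline like this, cleanup becomes mechanical: set the policy once at the vSwitch and delete every port group override the audit flags, rather than hand-checking each port group in the UI.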