Replying to:
TheKenApp

The more I read about LBT (Route Based on Physical NIC Load), the more I understand that when a physical NIC's load exceeds 75% utilization, the VM load is taken into account for balancing. What I'm having a hard time determining is whether other loads, like the vMotion traffic you mention, or more importantly in my case the NFS VM storage traffic, are also monitored and balanced when a physical NIC becomes more than 75% saturated.

In our case, I am considering keeping both links active, which is how our current VM infrastructure is designed (using Route Based on Originating Virtual Port).

You said that you use LBT/Physical NIC Load only for non-VMkernel port groups such as VM networks. If you use NFS (or iSCSI, for that matter), how do you handle load balancing/NIC teaming?

The blog by Chris Wahl uses LBT/Physical NIC Load with NFS. He states that "any portgroup will proactively monitor the vmnic utilization in their team and shift workloads around." That would seem to indicate it is not only VM workloads that are monitored and balanced.

Our NFS arrays are on the same subnet. NFS traffic is not routed, and the hosts and storage are connected to the same physical switch. There is a 4x 10 Gb LAG from the physical switch to the NFS array, so the bottleneck will be on the host side. If I am understanding LBT correctly, the NFS port group/vmk on the dvSwitch will use one of the physical NICs in the host (without LACP on the switch for these host-facing ports, I'm not sure how that traffic could otherwise be shared between them), but other loads, such as ESXi management and vMotion, may be moved if the NFS link is over 75% utilized.
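To check my own understanding, here is a rough sketch (in Python, my own illustration, not anything from VMware's actual implementation) of how I picture the LBT rebalance pass working: every evaluation interval, any uplink whose utilization is above 75% has one of its virtual ports, VM or VMkernel alike, reassigned to the least-loaded uplink in the team. Note that a single port (like an NFS vmk) always rides one uplink at a time; LBT only moves the port-to-uplink mapping, it never splits a flow. All the names and the port-selection heuristic below are made up for the example.

```python
# Hypothetical sketch of an LBT-style rebalance pass. Assumptions:
# - utilization is a fraction of link capacity per virtual port
# - a port is moved whole; flows are never split across uplinks
LBT_THRESHOLD = 0.75  # LBT's documented 75% trigger

def rebalance(uplinks):
    """uplinks: {vmnic: {port_name: utilization_fraction}} (mutated in place).
    Returns a list of (port, src_vmnic, dst_vmnic) moves."""
    moves = []
    load = {nic: sum(ports.values()) for nic, ports in uplinks.items()}
    for nic, ports in uplinks.items():
        # keep moving ports off this uplink until it drops under threshold
        while load[nic] > LBT_THRESHOLD and len(ports) > 1:
            port = min(ports, key=ports.get)   # simplistic: move lightest port
            dst = min(load, key=load.get)      # to the least-loaded uplink
            if dst == nic:
                break  # nowhere better to put it
            util = ports.pop(port)
            uplinks[dst][port] = util
            load[nic] -= util
            load[dst] += util
            moves.append((port, nic, dst))
    return moves

# Example: vmnic0 carries a VM port (50%) plus the NFS vmk (40%) = 90%,
# which is over threshold, so the lighter port gets shifted to vmnic1.
team = {
    "vmnic0": {"vm-web": 0.50, "nfs-vmk": 0.40},
    "vmnic1": {"vm-db": 0.10},
}
print(rebalance(team))  # [('nfs-vmk', 'vmnic0', 'vmnic1')]
```

If that picture is right, it matches what Chris Wahl describes: the VMkernel ports are candidates to be moved just like VM ports, which is exactly the behavior I'm trying to confirm.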

Am I understanding how LBT will work with these non-VM loads correctly?

Thanks again.
