Looking at this article while trying to troubleshoot horrible Long Distance vMotion performance:
How to tune Long Distance vMotion bandwidth for better performance (for hot data)?
By default, Long Distance vMotion assumes an expected line rate of 1 Gbps and adjusts its socket buffer size based on that rate. If your environment supports a higher line rate, you can change this default to your actual line rate by setting the following VSISH option:
vsish -e set config/Migrate/intOpts/NetExpectedLineRateMBps <expected line rate>
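Judging by the option name, the value appears to be in megabytes per second, so a nominal link rate in Gbit/s would need converting first. A minimal sketch (the 10 Gbps figure is a hypothetical example, and the MB/s unit is an assumption from the name):

```shell
# Convert a nominal link rate in Gbit/s to the MB/s value expected by
# NetExpectedLineRateMBps (unit assumed to be MB/s based on the option name).
GBPS=10                        # hypothetical 10 Gbps vMotion link
MBPS=$(( GBPS * 1000 / 8 ))    # 10 Gbit/s -> 1250 MB/s
echo "$MBPS"

# The resulting vsish call (ESXi shell only, not run here) would then be:
# vsish -e set config/Migrate/intOpts/NetExpectedLineRateMBps 1250
```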
Starting with vSphere 6.7 U1, you can also increase the maximum socket buffer size by setting the following VSISH option:
vsish -e set /net/tcpip/instances/<netstack>/sbMax <max ideal socket buffer size>
<netstack> should be either the default networking stack or the vMotion networking stack, depending on which one you are using for vMotion hot data traffic. <max ideal socket buffer size> should be the maximum bandwidth-delay product you expect in your environment.
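For reference, the bandwidth-delay product is just link rate times round-trip latency, expressed in bytes. A quick sketch with hypothetical numbers (a 10 Gbps link and 100 ms RTT):

```shell
# Bandwidth-delay product: (link rate in bits/s / 8) x RTT in seconds = bytes in flight.
RATE_BPS=10000000000    # hypothetical 10 Gbps link
RTT_MS=100              # hypothetical 100 ms round-trip latency
BDP_BYTES=$(( RATE_BPS / 8 * RTT_MS / 1000 ))
echo "$BDP_BYTES"       # 125000000 bytes, i.e. ~125 MB for sbMax
```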
Was curious whether these settings take effect immediately or require a reboot? If immediate, do they survive a reboot, or do they need to be re-applied? I can see the first has a corresponding option in advanced settings, Migrate.NetExpectedLineRateMBps, but when I make the CLI change via vsish, Migrate.NetExpectedLineRateMBps doesn't seem to reflect the new value. So it's unclear whether they really are the same setting, whether both must be set, how one affects the other, etc.
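One way to compare the two side by side on the host would be something like the following (ESXi shell only; the advanced-option path is my assumption from the setting's display name, so treat both paths as unverified):

```shell
# Read the advanced setting (path assumed from the Migrate.NetExpectedLineRateMBps name).
esxcli system settings advanced list -o /Migrate/NetExpectedLineRateMBps

# Read the vsish node from the article, to see whether the two values agree.
vsish -e get config/Migrate/intOpts/NetExpectedLineRateMBps
```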
The second one, the socket buffer size, doesn't seem to have any equivalent in advanced settings, which is curious.