I'm in the process of upgrading my hardware infrastructure, scaling up from a 1GbE network (and scaling out from the current 2x ESXi nodes to 3x).
One of the objectives is to enable FT on a couple of extremely critical VMs, but I cannot find a definitive answer in the docs about what is mandatory and what is just best practice.
Since reducing cabling would be a plus in a vacuum, and even more so while scaling out for obvious reasons, I wonder whether putting all three VMkernel adapters (vMotion, Management, and Fault Tolerance) on the same vSwitch would be a viable AND supported solution, provided the switch is backed by two 25GbE uplinks per host (with another two for my VM networks on a separate vSwitch).
I include my current config as images.
Thanks for reading,
Taken from here - Fault Tolerance Requirements, Limits, and Licensing
"Use a 10-Gbit logging network for FT and verify that the network is low latency. A dedicated FT network is highly recommended." which as you can see says recommended rather than required.
However, there is no way I would do this with standard virtual switches, as the vMotion traffic could (and likely would) swamp the network without some form of Network I/O Control, which is only available with a distributed switch.
Your configuration is definitely supported: running those three VMkernels over the same two adapters is fine. As a best practice, I recommend isolating these traffic types with three different VLANs, and doing the same for the VM networks if you are not already.
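If it helps, on a standard vSwitch the VLAN isolation could be sketched with esxcli along these lines (the portgroup names, vSwitch name, and VLAN IDs below are placeholders, not anything from your setup):

```shell
# Create one portgroup per traffic type on the shared vSwitch
# (names and VLAN IDs are examples -- substitute your own)
esxcli network vswitch standard portgroup add --portgroup-name=Mgmt    --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name=FT      --vswitch-name=vSwitch0

# Tag each portgroup with its own VLAN so the three traffic types
# stay isolated at layer 2 while sharing the same physical uplinks
esxcli network vswitch standard portgroup set --portgroup-name=Mgmt    --vlan-id=10
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20
esxcli network vswitch standard portgroup set --portgroup-name=FT      --vlan-id=30
```

You would then attach a VMkernel interface to each portgroup and enable the matching service (Management, vMotion, Fault Tolerance logging) on it; the same result can of course be achieved through the vSphere Client UI.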
Depending on the traffic going through the adapters, you may see an impact on network performance, but this is not expected and it should not saturate the links. You can also configure one uplink as Active and the other as Standby, and vary that order across the different portgroups. I've seen infrastructures running vMotion, FT, and storage traffic over 10Gbps links without Network I/O Control, with more than 1000 VMs, without any issues.
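To illustrate the Active/Standby idea, here is one possible per-portgroup failover order with esxcli (assuming hypothetical portgroups named vMotion and FT and uplinks vmnic0/vmnic1 -- adjust to your environment):

```shell
# Example only: pin vMotion to vmnic0 (vmnic1 standby) and FT the other
# way around, so the two heaviest traffic types prefer different uplinks
# while each still fails over to the other link if its uplink goes down
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=FT --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```

This way, under normal conditions vMotion bursts and FT logging never compete for the same physical link, yet you keep full redundancy with only two cables per vSwitch.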