Well, as an alternative to port bridging and daisy chaining, I have also been entertaining the idea of trying dual-port 10 GbE adapters in all 3 hosts and cabling them so that each host's NIC ports connect directly to the other 2 hosts, then setting them all up as vDS uplinks. The catch is that this restricts the layer 2 traffic from crossing between the 3 point-to-point links.
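For example (port names and subnets are purely hypothetical), the full mesh of point-to-point links would look something like:
- Host1 port 0 <-> Host2 port 0 on 192.168.101.0/24
- Host1 port 1 <-> Host3 port 0 on 192.168.102.0/24
- Host2 port 1 <-> Host3 port 1 on 192.168.103.0/24
Each link would be its own little layer 2 segment with no switch in between.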
So the only problem there is, per the VSAN network requirements:
https://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc%2FGUID-8408319D-CA53-4241-A3E4-70057F70030F.html (the last 2 bullets from that page, pasted here):
- Virtual SAN does not support multiple VMkernel adapters on the same subnet for load balancing. Multiple VMkernel adapters on different networks, such as VLAN or separate physical fabric, are supported.
- You should connect all hosts participating in Virtual SAN to a single L2 network, which has multicast (IGMP snooping) enabled. If the hosts participating in Virtual SAN span across multiple switches or even across L3 boundaries, you must ensure that your network is configured correctly to enable multicast connectivity. You can change multicast addresses from the defaults if your network environment requires, or if you are running multiple Virtual SAN clusters on the same L2 network.
More on that here: http://cormachogan.com/2014/01/21/vsan-part-15-multicast-requirement-for-networking-misconfiguration-detected/
So that is the part I am not really sure about: how important is the IGMP snooping / multicast traffic? Can't it still reach each of the 3 hosts even though there are 3 separate links? The hosts will all be able to talk to each other, but the separate networks they use will be isolated from one another.
STP is not an issue because vSwitches just don't cause loops (a vSwitch never forwards traffic received on one uplink back out another uplink). I currently use LBT (no EtherChannel, no LACP, no Spanning Tree, nothing) with both uplinks in a vDS (which is really just 3 hidden vSS underneath, one on each host), with the physical switch running straight open with no configuration. ESXi never causes loops, and LBT just moves the traffic to the uplink with the lowest utilization.
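For what it's worth, this is roughly what pushing that LBT teaming policy onto a vDS portgroup looks like through pyVmomi. The vCenter name, credentials and portgroup name below are made up, and I haven't run this against this exact setup, so treat it as a sketch:

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    # Hypothetical vCenter, credentials and portgroup name - adjust for your lab
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="changeme", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Find the dvPortgroup by name
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
    pg = next(p for p in view.view if p.name == "dvpg-vsan")
    view.DestroyView()

    # "Route based on physical NIC load" == loadbalance_loadbased (LBT)
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        policy=vim.StringPolicy(value="loadbalance_loadbased"),
        notifySwitches=vim.BoolPolicy(value=True))

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            uplinkTeamingPolicy=teaming))

    pg.ReconfigureDVPortgroup_Task(spec=spec)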
But my theory, per the first bullet point above, is that if I create SEPARATE VMkernel adapters on each host, each on a different network and connected to its respective uplink, it should work that way. But again, if VSAN is expecting to pass IGMP/multicast traffic across all the links, I am not sure how this will behave, because I do not have a firm grasp on that protocol or what they are using it for.
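If it helps to picture it, a rough pyVmomi sketch of that per-link VMkernel setup on one host is below. Portgroup names, addresses and subnets are hypothetical, it assumes a connection like the one in the earlier sketch with 'host' being the vim.HostSystem for Host1, and it is shown against plain vSwitch portgroups to keep it short (on a vDS the vmk spec would point at the dvPortgroup instead):

    from pyVmomi import vim

    # Assumes 'host' is a vim.HostSystem retrieved over an existing pyVmomi
    # connection. Names, IPs and subnets are made up for illustration.
    per_link_vmks = [
        ("vsan-link-to-host2", "192.168.101.1"),   # p2p link Host1 <-> Host2
        ("vsan-link-to-host3", "192.168.102.1"),   # p2p link Host1 <-> Host3
    ]

    net_sys = host.configManager.networkSystem
    nic_mgr = host.configManager.virtualNicManager

    for pg_name, ip in per_link_vmks:
        spec = vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask="255.255.255.0"),
            mtu=9000)
        vmk = net_sys.AddVirtualNic(pg_name, spec)   # returns e.g. "vmk2"
        nic_mgr.SelectVnicForNicType("vsan", vmk)    # tag the vmk for Virtual SAN traffic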
EDIT:
Again, if the Proof of Concept shows it works, then you can add a second 10 GbE card in another PCIe slot and set up a second set of paths to each host for network redundancy, using LBT if that would work.