Although both L2 and L3 are supported for vSAN, I would like to hear from you what the benefits and drawbacks of each network layer are when deploying a vSAN Stretched Cluster (metro distance) into a greenfield environment.
The customer does not have NSX licensing yet, but has full vSphere and vSAN Enterprise Plus licensing, plus Cisco Nexus 9K and 5K leaf and aggregation switches in place across two sites that meet all of the latency and bandwidth network requirements.
The customer initially has a single vCenter instance, which means it will manage both the preferred and secondary sites. The idea is to create only one VDS in this initial design to avoid complexity.
Another question related to this same customer scenario: if we decide to go with L3 routing for the vSAN VMkernel stack, what are the impacts of taking the same approach for the management and vMotion network stacks? For example, in a hot vMotion migration of a VM connected to a VM network port group, the data will be transferred via L3, but what happens when the VM lands in the other site's broadcast domain? Would only cold migration over the provisioning VMkernel work?
Thanks in advance! Cheers!
If you possibly can do L2, that's what I'd recommend, because it avoids having to maintain static routes on all the vSAN and witness nodes, which can be a pain point, particularly when scaling the cluster (it's easy to forget about the routes). As for management and vMotion, they're fine with L3 as long as vCenter, at your management site, can see everything. vMotion should go on its own TCP/IP stack so its routing can be controlled independently. A cold migration does not use vMotion at all; instead it traverses the management VMkernel interfaces as a network file copy (NFC) job, so you must have management connectivity between the source and destination hosts.
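To illustrate putting vMotion on its own TCP/IP stack with its own routing, a sketch of the host-side commands might look like this (the vmk number, port group name, subnets, and gateway below are hypothetical placeholders for your environment):

```shell
# Create a VMkernel interface bound to the dedicated vMotion TCP/IP stack
# (vmk2 and "vMotion-PG" are example values)
esxcli network ip interface add --interface-name=vmk2 \
    --portgroup-name="vMotion-PG" --netstack=vmotion

# Assign a static IPv4 address to that interface (example addressing)
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.20.11 --netmask=255.255.255.0 --type=static

# Add a route for the remote site's vMotion subnet on the vMotion stack only,
# so vMotion routing stays independent of the default (management) stack
esxcli network ip route ipv4 add --network=192.168.120.0/24 \
    --gateway=192.168.20.1 --netstack=vmotion
```

Because the route is scoped to the vmotion netstack, it has no effect on management or vSAN traffic on the default stack.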
daphnissov is correct that vSAN can and does support L3, but static routes are required, because vSAN uses the same default TCP/IP stack as the Management VMkernel interface.
If you're deploying a "normal" Stretched Cluster, you'll have to use static routes to connect the "backend" vSAN VMkernel interfaces to the vSAN Witness Host's vSAN VMkernel interface.
If you're deploying vSAN 6.7 or higher, Witness Traffic Separation (WTS) is an alternative mechanism. Depending on your topology, this may still require static routes, or it may not.
Witness Traffic Separation enables communication with the vSAN Witness Host over an alternate VMkernel interface (requires the "witness" traffic type, which is set from the command line). Additionally, as of vSAN 6.7 U1, you can even use Mixed MTU values for the "backend" and "frontend" vSAN networking.
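Tagging the "witness" traffic type on an alternate VMkernel interface is done from the ESXi command line. A minimal sketch, assuming vmk1 is the interface you want to carry witness traffic:

```shell
# Tag vmk1 (example interface) to carry vSAN witness traffic
esxcli vsan network ip add -i vmk1 -T=witness

# Confirm which interfaces carry which vSAN traffic types
esxcli vsan network list
```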
I have a couple of blog posts on WTS here:
Understanding Mixed MTU support in Stretched & 2 Node vSAN 6.7 U1
And you can find more about WTS on StorageHub in the Stretched Cluster Guide under New Concepts.
vSAN Stretched Cluster Guide
Some folks consider tagging "witness" on the Management VMkernel interface to avoid the requirement to configure any static routes.
Some would argue that this isn't a great practice. The jury is still out on that, but if you do consider it, make sure the Management network is (as any Management network should be) isolated from non-administrative traffic.
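If you do go that route, the same tagging command is simply applied to the Management VMkernel interface instead of a dedicated one (vmk0 below is the typical, but not guaranteed, Management interface):

```shell
# Tag the Management VMkernel interface (commonly vmk0) for witness traffic,
# so witness communication follows the management network's existing routing
esxcli vsan network ip add -i vmk0 -T=witness

# To back the change out later, untag the interface
esxcli vsan network remove -i vmk0
```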