Hi,
A bit of context...
For VM traffic, only one Distributed Switch is configured and presented to all hosts in all clusters. VMs then connect to Distributed Port Groups as required and can communicate with other VMs on the same or a different Distributed Port Group connected to the same Distributed Switch, regardless of which host/cluster they reside on.
For Management, I think the same as above applies best (one Distributed Switch for all clusters, per data center) for ease of management.
Now the question: do you agree with the above...?
For vSAN, a Distributed Port Group is created and the vmkernel adapter(s) attached to it. Hosts are expected to communicate with vSAN storage that is local to the cluster. For this reason, should a Distributed Switch be created per cluster and presented only to the hosts in that cluster?
If you have enough physical NICs, you can create one Distributed Switch for Management and VM traffic and a separate one for vSAN traffic only.
Yes, I have 6x NICs, so I'm thinking of dedicating a dvSwitch per function, except Management and vMotion, which will share the same one but alternate primary uplink.
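To make that layout concrete, here is a minimal sketch of one way the 6 NICs could be divided. This is plain Python modelling the plan as data, not a VMware API; all switch, port group, and vmnic names are hypothetical assumptions, and the active/standby split for Management and vMotion mirrors the "alternate primary uplink" idea.

```python
# Illustrative sketch (not a VMware API): the proposed 6-NIC uplink
# layout as plain data. All names below are hypothetical examples.

# Each vDS maps port groups to their active/standby physical uplinks.
layout = {
    "dvs-mgmt-vmotion": {
        "Management": {"active": ["vmnic0"], "standby": ["vmnic1"]},
        # vMotion alternates the primary uplink with Management.
        "vMotion":    {"active": ["vmnic1"], "standby": ["vmnic0"]},
    },
    "dvs-vsan": {
        "vSAN":       {"active": ["vmnic2"], "standby": ["vmnic3"]},
    },
    "dvs-vm": {
        "VM-Traffic": {"active": ["vmnic4"], "standby": ["vmnic5"]},
    },
}

def nics_used(layout):
    """Collect every physical NIC referenced anywhere in the layout."""
    nics = set()
    for port_groups in layout.values():
        for teaming in port_groups.values():
            nics.update(teaming["active"])
            nics.update(teaming["standby"])
    return nics

# Management and vMotion share a switch but use opposite primaries,
# so losing either NIC leaves both services running on the survivor.
assert layout["dvs-mgmt-vmotion"]["Management"]["active"] != \
       layout["dvs-mgmt-vmotion"]["vMotion"]["active"]
```

The point of writing it out this way is just to check the plan on paper: all six NICs are consumed, vSAN gets its own pair, and the shared Management/vMotion switch still survives a single uplink failure.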
Does vSAN require specific NICs, or as long as the NIC is on the VCG is it automatically good for vSAN?
Hello andvm
The vSAN HCL is only for controllers, SSDs, HDDs and NVMe devices - for any other component (including NICs), go with the ESXi VCG.
Bob