Will this work with just 2 NICs per physical node? The VxRails come with 2x 10Gb NICs, and this is as per the Dell/VMware RA for EUC. However, the need for NSX does not compute with my pea-sized brain. I have the normal requirements for Management, vMotion, vSAN, etc. However, to keep effective separation between Horizon server and desktop pools, I also planned a separate DLR with separate Edges for server and desktop pools, each VXLAN trunked to the ToR switch (using the switch as a VTEP gateway).
You can certainly leverage the default NICs (2) for both N-S and E-W traffic, so Transit VLAN, VXLAN, and all other traffic will exit via the same set of interfaces. That can become a bottleneck depending on the real use case, and troubleshooting will be slightly more difficult whenever the situation demands it. For DFW there are no changes happening at the DVS level. Ideally I prefer keeping vSAN/Management traffic dedicated to the default DVS/NICs and adding 2-4 additional PCI NICs for NSX traffic; this is one of the designs I implemented recently for a similar use case. For VDI, micro-segmentation is a good candidate, and as you mentioned you can leverage DLR/Edges for routing between management subnets and desktop pools. The NSX LB can also be used where there are multiple Connection Servers and UAG instances. For a hardware VTEP, ensure the switch is supported, and please note that you cannot use a DLR in that case.
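To make the bottleneck concern concrete, here is a rough back-of-envelope sketch of how quickly a shared 2x 10Gb uplink pair fills up once vSAN, vMotion, VXLAN, and N-S traffic all exit the same interfaces. The per-class demand figures are purely illustrative assumptions, not measurements from any real cluster:

```python
# Back-of-envelope check of headroom on a 2 x 10 Gb uplink pair when all
# traffic classes share the same NICs. Demand figures below are
# illustrative assumptions only.

NIC_COUNT = 2
NIC_SPEED_GBPS = 10
total_capacity = NIC_COUNT * NIC_SPEED_GBPS  # 20 Gbps aggregate

# Hypothetical steady-state demands per traffic class (Gbps)
demands = {
    "vSAN": 8.0,          # resync/rebuild traffic can be bursty
    "vMotion": 4.0,
    "VXLAN (E-W)": 3.0,
    "North-South": 2.0,
    "Management": 0.5,
}

used = sum(demands.values())
print(f"Aggregate capacity: {total_capacity} Gbps")
print(f"Demand: {used} Gbps, headroom: {total_capacity - used} Gbps")

# With one NIC failed, capacity halves and the same demand oversubscribes it.
degraded = NIC_SPEED_GBPS * (NIC_COUNT - 1)
print(f"One NIC down: {degraded} Gbps capacity, "
      f"oversubscribed by {max(0.0, used - degraded)} Gbps")
```

With these example numbers the healthy pair has only 2.5 Gbps of headroom, and losing a single NIC leaves the remaining 10 Gbps oversubscribed by 7.5 Gbps, which is why dedicating separate NICs to NSX traffic (and using NIOC shares as a safety net) is the safer design.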
There is a good doc that covers a few design scenarios as well: VMware® NSX for vSphere End-User Computing Design Guide 1.2.
You can check that doc for quick reference, even though it only covers very basic design scenarios for VxRail-NSX.
Thanks for that. I would prefer, as you say, to have separate NICs for NSX traffic. The plan was to place an NSX LB in-line with the UAGs, and I have been trying to align with the EUC Design Guide where possible.
Now for the stupid question: Why does the use of a hardware VTEP prevent the use of a DLR?