fifthman_roshan
Enthusiast

NSX-T 3.0 Design Suggestion

Hello experts, here is the point of discussion.

The customer wants to segregate N-S and E-W traffic in the NSX-T environment and has enough 10 GbE pNICs available. We proposed the following:

2 for - VDS (mgmt, vMotion)

2 for - N-VDS1 - for workload VMs

2 for - N-VDS2 - for DMZ workloads

But in this design, the Edge VM will sit on the host N-VDS and will consume the same 2 pNICs for both overlay and N-S traffic.

If we design it per the customer's requirement, it is like going back to the old 2.4 Edge VM design, where the Edge had multiple N-VDSes and it was possible to assign individual pNICs for E-W and N-S traffic using a named teaming policy. This design is still possible in 2.5 and 3.0; however, the pNICs assigned to N-S traffic on a host with no Edge VM will sit idle until an Edge VM moves to that host. So what is the point of keeping pNICs unused?
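For reference, the named-teaming mechanism mentioned above is configured through an uplink profile. The sketch below builds such a payload (as would be POSTed to the NSX-T Manager API at /api/v1/host-switch-profiles); the uplink names, profile name, and the "ns-teaming" policy name are hypothetical examples, not values from this environment:

```python
import json

def build_uplink_profile(name, default_uplinks, ns_uplinks):
    """Build an UplinkHostSwitchProfile payload: the default teaming carries
    overlay (E-W) traffic, while a named teaming pins N-S VLAN segments to
    dedicated uplinks. All names here are illustrative assumptions."""
    def entries(uplinks):
        return [{"uplink_name": u, "uplink_type": "PNIC"} for u in uplinks]
    return {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": name,
        # Default teaming policy: used by the overlay (TEP / E-W) traffic.
        "teaming": {
            "policy": "FAILOVER_ORDER",
            "active_list": entries(default_uplinks),
        },
        # Named teaming: referenced by the N-S VLAN transport zone/segments,
        # so N-S traffic can be steered to its own pNICs.
        "named_teamings": [{
            "name": "ns-teaming",
            "policy": "FAILOVER_ORDER",
            "active_list": entries(ns_uplinks),
        }],
        "transport_vlan": 0,
    }

profile = build_uplink_profile("edge-host-uplink-profile",
                               ["uplink-1", "uplink-2"],
                               ["uplink-3", "uplink-4"])
print(json.dumps(profile, indent=2))
```

The trade-off discussed above follows directly from this: the named teaming's uplinks are reserved per host, so on hosts without an Edge VM they carry no traffic.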

Comments please.

2 Replies
Sreec
VMware Employee

If the version is NSX-T 3.0 with vSphere 7.0 and VDS 7.0, you can follow the approach below:

2 for - VDS01 (mgmt, vMotion)

2 for - VDS02 - for workload VMs with Edges

2 for - VDS03 - for DMZ workloads with DMZ Edges

As far as traffic segregation goes, the above design alone is not enough. They would also need unique uplink connectivity to a dedicated TOR L3 switch and firewall; sometimes people collapse the physical layer (DMZ and DC) and rely on micro-segmentation for the DMZ workloads instead, which is also fine. On a side note, the uplink utilization of a NIC depends entirely on DRS moving Edges or VMs to different hosts, something we ideally don't control (DRS rules being the exception), so I'm unsure why we should worry about this point. If you have an equal number of servers and tenants with Edges in A-A and A-S configurations, there will certainly be utilization on all pNICs across all servers.

Cheers,
Sree | CKA|CKAD|VCIX-3X| VCAP-4X| VExpert 5x
Please KUDO helpful posts and mark the thread as solved if answered
fifthman_roshan
Enthusiast

They would need unique uplink connectivity to unique TOR L3 and FW >>>> Yes, the DMZ NICs connect to the firewall and the workload NICs connect to the core switch.

What are your thoughts on going back to the 2.4 design, where the Edge has multiple N-VDSes, versus 2.5 and later, where the Edge has a single N-VDS for both overlay and N-S?

Is it still feasible to design that way in 3.0? Or do you think we should push the customer toward a single N-VDS?

Another thing to be considered is that ESXi is on 6.7 U3.
