We are installing NSX into our Production environment. It consists of two linked vCenters and 200 or so hosts. The question I have for you guys is: when you did this in a brownfield-type deployment, did most of you use your existing vDS, or create a new one? The hosts we are POCing NSX on in Prod are new, so there would be no problem creating a new vDS. My concern is that when we get to the point where we want to NSX-enable the other clusters, they will still be using the old, non-NSX-enabled vDS.
I am having a hard time thinking through this alone. What did you guys do? What are your recommendations? Anything to consider, or any gotchas?
It depends on how the current vDS and its uplinks are set up.
If you are going to use VXLAN (logical switches, logical routers), NSX will create VMkernel PortGroups for VTEPs and PortGroups for the logical switches.
Are the PortGroups for VM networks on the vDS?
Normally the same vDS used for VM networks is used for NSX as well.
The reason is that if you want to do L2 bridging, e.g. for migration, the VLAN-backed PortGroup you want to bridge must be on the same vDS as the VXLAN.
See this doc: L2 Bridges
VXLAN (VNI) network and VLAN-backed port groups must be on the same distributed virtual switch (VDS).
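That constraint can be expressed as a simple pre-flight check. A minimal sketch, where the dicts are hypothetical inventory data (the names and structure are illustrative, not an NSX API):

```python
# Illustrative pre-flight check for L2 bridging: the VLAN-backed
# port group and the VXLAN logical switch must be on the same vDS.
# The dicts below are made-up inventory data, not an NSX API.

portgroup_vds = {"PG-VLAN100": "vDS-Prod"}        # VLAN-backed port group -> vDS
logical_switch_vds = {"LS-Web-5001": "vDS-Prod"}  # VXLAN logical switch -> vDS

def can_bridge(portgroup: str, logical_switch: str) -> bool:
    """Return True only if both ends of the bridge share one vDS."""
    return portgroup_vds[portgroup] == logical_switch_vds[logical_switch]

print(can_bridge("PG-VLAN100", "LS-Web-5001"))  # True: same vDS
```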
Another consideration: make sure the vDS uplink count matches the actual number of uplinks; by default a vDS is created with four uplinks.
Example: Working with a vSphere Distributed Switch
By default, four uplinks are created. Adjust the number of uplinks to reflect your VDS design. The number of uplinks required is normally equal to the number of physical NICs you allocate to the VDS.
Related VMware KB: NSX for vSphere 6.x VTEP and vDS Uplink dependencies (2149826)
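A quick way to reason about the "uplinks must match" point above is to compare the vDS uplink count against the NICs actually allocated per host. A minimal sketch over hypothetical inventory data (host and vmnic names are examples):

```python
# Illustrative check that the vDS uplink count matches the physical
# NICs allocated to the vDS on each host (hypothetical inventory data).

vds_uplink_count = 4  # the vDS default is four uplinks

# host -> vmnics allocated to this vDS (example data, not from an API)
host_vmnics = {
    "esx01": ["vmnic0", "vmnic1"],
    "esx02": ["vmnic0", "vmnic1", "vmnic2", "vmnic3"],
}

def hosts_with_mismatch(uplinks: int, allocation: dict) -> list:
    """Hosts whose allocated NIC count differs from the vDS uplink count."""
    return [host for host, nics in allocation.items() if len(nics) != uplinks]

print(hosts_with_mismatch(vds_uplink_count, host_vmnics))  # ['esx01']
```

Hosts flagged here are the ones the KB above warns about: VTEP creation depends on the uplink definition lining up with the physical NICs.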
If you want to see a sample of a published vDS design for NSX, see this doc: Dell EMC VxBlock™ Systems for VMware NSX 6.3 Architecture Overview
This is really good information! Thank you!
One problem we have is that on our big Prod vDS we have 2 uplinks. However, some hosts have 2 uplinks, while other hosts (blades specifically) have 4. Is there a way to work around this?
**Correction: we have it set to 4 uplinks.
NSX host preparation, i.e. VXLAN configuration, is per cluster, so you can have a different vDS for each cluster if there are configuration differences between the clusters.
You mentioned the blades have 4 uplinks on their vDS. Are all 4 uplinks used by a common vDS and common PortGroup?
For example, vDS-AB with 4 uplinks backed by 4 vmnics (vmnic0, vmnic1, vmnic3, vmnic4)?
If you have portgroupA on vmnic0 & vmnic3 only, and portgroupB on vmnic1 & vmnic4 only, and you are planning to use this vDS for NSX VXLAN,
you should split the vDS in two: vDS-A with vmnic0 & vmnic3 and vDS-B with vmnic1 & vmnic4.
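The split logic above can be sketched mechanically: group port groups by the exact set of vmnics they use, and each distinct set becomes a candidate standalone vDS. A minimal sketch with hypothetical port group and vmnic names:

```python
# Illustrative sketch of the split: group port groups by the exact
# set of vmnics they use; each distinct uplink set is a candidate
# standalone vDS. Names below are made-up examples.

portgroup_uplinks = {
    "portgroupA": {"vmnic0", "vmnic3"},
    "portgroupB": {"vmnic1", "vmnic4"},
}

def suggest_split(pg_uplinks: dict) -> dict:
    """Map each distinct uplink set to the port groups that use it."""
    groups: dict = {}
    for pg, nics in pg_uplinks.items():
        groups.setdefault(frozenset(nics), []).append(pg)
    return groups

# Two distinct uplink sets -> two separate vDS are needed for VXLAN.
print(len(suggest_split(portgroup_uplinks)))  # 2
```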
If you have a diagram to share, that would be great, so I can make sure my assumptions are correct.