VMware Networking Community
jvm2016
Hot Shot

Number of vDS in an NSX deployment

Hi all,

Could someone advise on the following?

Consider we have three clusters in a DC, namely compute1, compute2, and management.

Normally I have seen a separate vDS configured for each cluster.

Since a vDS is created at the DC level, what advantages or disadvantages would we face if we configured only one vDS spanning all clusters, and a single transport zone covering all clusters, to keep the configuration simpler?

I am referring to the reference diagram on page 20 of https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.4/nsx_64_install.pdf.

Since the uplink port group is a trunk and carries the traffic of all VLANs, why not configure only a single vDS?

I appreciate your response on this.

4 Replies
Sreec
VMware Employee

DVS design and deployment is a key part of NSX design. Multiple factors usually contribute to the DVS design, such as the server model (rack/blade) and the Edge and VXLAN traffic, and for simplicity one DVS per cluster is preferred. In your example we have two compute clusters and one management cluster; what about the Edge deployment? Is the Edge collapsed with compute or with management? If you have no plans to prepare the management cluster for NSX, I would ideally prefer a unique DVS for management, so that host upgrades, DVS upgrades, NIC driver upgrades and so on can be planned and done down the line without impacting the workload clusters, and most importantly so that a vMotion boundary is defined for the management cluster. From an NSX perspective it would then be under a unique DVS and transport zone (only if it is NSX prepared).

For the workload clusters you can have a single DVS spanning both clusters, but if your workload clusters run different server models (for example, HP blades and UCS blades), it is better to have a unique DVS per cluster so that NIC alignment can be done properly, the configuration stays neat and clean, and most importantly the NIC teaming configuration for VXLAN and Edge traffic stays consistent with the cluster design. The uplink connectivity carrying VXLAN traffic must be consistent across all hosts and the teaming policy should be the same. Consider that your workload cluster is UCS blades: there is no way to configure LACP teaming mode there, so the preferred way is route based on originating port ID. In a nutshell, having separate DVS for management and Edge allows flexibility in NIC teaming policies, and the choice of teaming should also be based on the bandwidth requirements. Let me know if you have any queries.
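
As an illustration, here is a rough pyVmomi (Python) sketch (the vCenter address and credentials are placeholders; adjust for your environment) that lists each vDS, which clusters its member hosts belong to, and the default uplink teaming policy of each port group:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; certificate verification disabled for a lab only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Collect every VMware vDS in the inventory
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)

for dvs in view.view:
    # Each member host's parent is its cluster (or compute resource for a standalone host)
    clusters = {member.config.host.parent.name for member in dvs.config.host}
    print(f"{dvs.name}: spans {sorted(clusters)}")

    # Default uplink teaming policy per port group,
    # e.g. 'loadbalance_srcid' = route based on originating port ID
    for pg in dvs.portgroup:
        teaming = pg.config.defaultPortConfig.uplinkTeamingPolicy
        if teaming and teaming.policy:
            print(f"  {pg.name}: teaming = {teaming.policy.value}")

view.Destroy()
Disconnect(si)

If one switch spans both the management and compute clusters, or the teaming values differ between port groups carrying VXLAN traffic, that is the inconsistency to address first.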

Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 6x|Cisco Certified Specialist
Please KUDO helpful posts and mark the thread as solved if answered
jvm2016
Hot Shot

Thanks for your response.

I do have a few points to discuss.

1. You asked about the Edge: is it not recommended to put the Edge (I believe you mean the ESGs and the DLR control VM) in the management cluster? So, in NSX terms, the Edge is collapsed with management.

2. Also, for the time being I am considering a greenfield deployment of a small DC in a single chassis of blades (for example, 16 blades in one chassis), just to avoid mixing vendors and keep it simple.

3. So, referring to the point of having two vDS and configuring the following port groups as per the reference guide: I still think these port groups could be configured on a single vDS spanning both the workload clusters and the management cluster, and it should not impact operations such as a vDS upgrade (which I believe is non-disruptive), a host upgrade, or a NIC upgrade, done by putting the corresponding host into maintenance mode and using its management network.

4. As far as the vMotion boundary is concerned, we are considering shared storage common to both clusters and vmkernel interfaces in the same subnet, so with logical switches a VM can move from compute to management or vice versa, if needed, without losing connectivity (an advantage of NSX logical switching).

So, all I want to know again is: is configuring two vDS as shown below a necessity or a best practice?

Please correct me if I am missing something fundamental in the statements above.

[Attachment: pastedImage_0.png]

Sreec
VMware Employee

1) You asked about the Edge: is it not recommended to put the Edge (I believe you mean the ESGs and the DLR control VM) in the management cluster? So, in NSX terms, the Edge is collapsed with management.

Yes, it is better to have a dedicated cluster for Edges, but collapsing them with the management cluster is also fine as long as you are OK with preparing that cluster for NSX. I have seen both designs, and each has its own advantages and disadvantages. One classic example: multiple host failures in the management cluster will impact Edge placement even in HA mode (this also depends on the vSphere design); ideally people use a maximum of four hosts. With that approach, both the management plane and data traffic are impacted, which defeats the overall NSX architecture. Most importantly, there is also an NSX licensing benefit if the management cluster runs only management components (no Edges/DLR) and no DFW.
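
If the Edges do end up collapsed with the management cluster, a DRS anti-affinity rule at least keeps the two ESG appliances on separate hosts, so a single host failure cannot take out both. A minimal pyVmomi sketch follows; the cluster name mgmt-edge-cluster, the VM names edge-esg-0/edge-esg-1 and the vCenter details are assumed placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Return the first managed object of the given type with a matching name
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "mgmt-edge-cluster")  # assumed cluster name
esg0 = find_by_name(vim.VirtualMachine, "edge-esg-0")                    # assumed ESG VM names
esg1 = find_by_name(vim.VirtualMachine, "edge-esg-1")

# DRS rule: never place the two ESG appliances on the same host
rule = vim.cluster.AntiAffinityRuleSpec(name="esg-separate-hosts", enabled=True, vm=[esg0, esg1])
spec = vim.cluster.ConfigSpecEx(rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)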

2) Also, for the time being I am considering a greenfield deployment of a small DC in a single chassis of blades (for example, 16 blades in one chassis), just to avoid mixing vendors and keep it simple.

This is a perfect way to start; same-model servers are always a good candidate in any design.

3) So, referring to the point of having two vDS and configuring the following port groups as per the reference guide: I still think these port groups could be configured on a single vDS spanning both the workload clusters and the management cluster, and it should not impact operations such as a vDS upgrade (which I believe is non-disruptive), a host upgrade, or a NIC upgrade, done by putting the corresponding host into maintenance mode and using its management network.

You can always make upgrades seamless if the underlying vSphere design is done properly, and you are right that there is no data-plane impact, but there is an operational impact for a DVS upgrade. In some cases the management cluster is also used to host other parts of the VMware stack, such as vRA/vCD/SRM, and the customer might have a dedicated workload cluster for running those apart from the NSX workload cluster. Provisioning of VMs will certainly be impacted if they leverage the same DVS, as the end user might not have direct visibility into the underlying layer. So think ahead and design the infrastructure accordingly.
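
To see that operational impact before planning a DVS upgrade, a short pyVmomi sketch such as the one below (same placeholder vCenter details as above; adjust for your environment) reports each switch's version and how many hosts and VMs are attached to it:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
for dvs in view.view:
    s = dvs.summary
    # hostMember/vm can be unset on an empty switch, hence the 'or []'
    print(f"{s.name}: version {s.productInfo.version}, "
          f"{len(s.hostMember or [])} hosts, {len(s.vm or [])} VMs attached")
view.Destroy()
Disconnect(si)

On a shared DVS the same switch will list management VMs alongside tenant workloads, which is exactly the visibility concern mentioned above.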

4) As far as the vMotion boundary is concerned, we are considering shared storage common to both clusters and vmkernel interfaces in the same subnet, so with logical switches a VM can move from compute to management or vice versa, if needed, without losing connectivity (an advantage of NSX logical switching).

Yes, you can certainly migrate the VMs back and forth. The vMotion kernel interfaces can also be in different L2 subnets for each cluster (depending on the physical design).
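
For reference, the cross-cluster move can be scripted as well. The sketch below is a minimal pyVmomi example (the VM name app-vm-01, the cluster name management and the vCenter details are assumed placeholders) that relocates a VM to a host in the management cluster; with shared storage and the VM attached to a logical switch, only the host and resource pool change:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Return the first managed object of the given type with a matching name
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "app-vm-01")                # assumed VM name
target = find_by_name(vim.ClusterComputeResource, "management")   # assumed cluster name

# Shared storage: the VM keeps its datastore, only host and resource pool change
spec = vim.vm.RelocateSpec(host=target.host[0], pool=target.resourcePool)
WaitForTask(vm.RelocateVM_Task(spec))

Disconnect(si)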

So, all I want to know again is: is configuring two vDS as shown below a necessity or a best practice?

I can say it is both :) based on the use case and design. Total isolation of the management cluster (no Edges) from the workload clusters is what I would want in the first place, and for such use cases it starts all the way from a unique cluster, DVS, hosts, and so on.

Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 6x|Cisco Certified Specialist
Please KUDO helpful posts and mark the thread as solved if answered
tanurkov
Enthusiast

Best practice is to use three types of clusters:

  1. Management
  2. Edge
  3. Compute

The best way is to prepare only the compute clusters for VXLAN, and in some cases the Edge cluster as well.

Just follow that.

Using one DVS is not a problem, but you will need to prepare the whole DVS for NSX, which also puts the NSX Manager and the other management components onto NSX-prepared hosts, a scenario where a fault in the NSX components can lock you out of the NSX Manager and the management workloads.

So use three separate DVS and prepare only the compute one, or at most two if you include the Edge cluster.

Regards, Dmitri