1) You asked whether it is not recommended to place the Edge (I believe you mean the ESGs and the DLR Control VM) in the management cluster; in NSX terms, that is an Edge cluster collapsed with management.
Yes, it is better to have a dedicated cluster for Edges, but collapsing them with the management cluster is also fine as long as you are OK with preparing that cluster for NSX. I have seen both designs, and each has its own advantages and disadvantages. One classic example: multiple host failures in the management cluster will impact Edge placement even in HA mode (this also depends on the vSphere design); people typically use a maximum of four hosts. In that scenario both the management plane and data traffic are impacted, which defeats the overall NSX architecture. Most importantly, there is also an NSX licensing benefit if the management cluster runs only management components (no Edges/DLR) and no DFW.
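To make the host-failure point concrete, here is a minimal sketch of the capacity math behind it. The function name and the numbers are mine, not from the thread; it is just a stand-in for a DRS anti-affinity rule, not a real placement engine.

```python
# Illustrative only: model how many hosts remain for Edge anti-affinity
# after host failures in a small collapsed management/Edge cluster.

def edges_keep_anti_affinity(total_hosts, failed_hosts, edge_vms):
    """True if each Edge VM can still land on its own host
    (a simple stand-in for a DRS anti-affinity rule)."""
    surviving = total_hosts - failed_hosts
    return surviving >= edge_vms

# A 4-host collapsed cluster with 2 ECMP Edges survives 2 host failures:
print(edges_keep_anti_affinity(4, 2, 2))  # True
# With 4 Edges, losing 2 hosts forces Edges onto shared hosts:
print(edges_keep_anti_affinity(4, 2, 4))  # False
```

This is why a four-host collapsed cluster leaves little headroom: once failures exceed the spare hosts, Edge placement (and therefore the data path) degrades along with the management plane.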
2) Also, for the time being I am considering a greenfield deployment of a small DC in a single chassis of blades (for example, 16 blades in one chassis), just to avoid mixing vendors and keep it simple.
This is the perfect way to start; same-model servers are always a good candidate in any design.
3) So, referring to the point of having two vDS and configuring the following port groups as per the reference guide: I still think these port groups could be configured on a single vDS spanning both the workload cluster and the management cluster, and that this should not impact operations like a vDS upgrade (which I believe is non-disruptive), a host upgrade, or a NIC upgrade (by putting the corresponding host in maintenance mode and using that host's management network).
You can always make upgrades seamless if the underlying vSphere design is done properly, and you are right that there is no data-plane impact, but there is an operational impact for a DVS upgrade. In some cases the management cluster also hosts other VMware stack components like vRA/vCD/SRM, and the customer might have a dedicated workload cluster for such workloads apart from the NSX workload cluster. Provisioning of VMs will certainly be impacted if they share the same DVS, as the end user might not have direct visibility into the underlying layer. So think ahead and design the infrastructure accordingly.
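The operational-impact argument can be sketched as a simple mapping exercise: the scope of a vDS upgrade is every cluster attached to that vDS. Cluster and switch names below are made up for illustration.

```python
# Illustrative only: the operational blast radius of a vDS upgrade is
# every cluster attached to that vDS. Names are hypothetical.

def upgrade_scope(cluster_to_dvs, dvs_name):
    """Return the clusters whose operations a given vDS upgrade touches."""
    return sorted(c for c, d in cluster_to_dvs.items() if d == dvs_name)

# One vDS spanning both clusters: an upgrade touches everything.
single_vds = {"mgmt-cluster": "dvs-shared", "workload-cluster": "dvs-shared"}
# Two vDS: an upgrade stays inside one cluster.
two_vds = {"mgmt-cluster": "dvs-mgmt", "workload-cluster": "dvs-workload"}

print(upgrade_scope(single_vds, "dvs-shared"))  # ['mgmt-cluster', 'workload-cluster']
print(upgrade_scope(two_vds, "dvs-workload"))   # ['workload-cluster']
```

With a dedicated vDS per cluster, a management-side change window never overlaps with workload provisioning, which is exactly the isolation argument above.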
4) As far as the vMotion boundary is concerned, we are considering storage shared by both clusters and vmkernels in the same subnet; with logical switch creation, a VM can move from compute to management or vice versa if needed without losing connectivity (an advantage of NSX logical switching).
Yes, you can certainly migrate VMs back and forth. The vMotion kernel can also be in a different L2 subnet for each cluster (based on the physical design).
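A quick way to reason about the "same subnet or not" question is to check whether the two vMotion vmkernel IPs fall in the same network; if they do not, vMotion traffic must be routed at L3 between the clusters. A minimal sketch using Python's standard `ipaddress` module, with example addresses that are not from the thread:

```python
import ipaddress

# Illustrative only: do two vMotion vmkernel IPs share an L2 subnet?
# If not, the physical design must route (L3) vMotion between clusters.
# The IPs and /24 prefix below are example values.

def same_l2_subnet(ip_a, ip_b, prefix=24):
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix}").network
    return net_a == net_b

print(same_l2_subnet("10.0.10.11", "10.0.10.12"))  # True  - same subnet
print(same_l2_subnet("10.0.10.11", "10.0.20.11"))  # False - routed vMotion
```

Either layout works for migration between the clusters; the routed case just depends on the physical network providing L3 reachability between the vMotion subnets.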
So all I want to know again is: is configuring two vDS as below a necessity or a best practice?
I can say it is both :smileyhappy: based on the use case and design. Total isolation of the management cluster (no Edges) from the workload cluster is what I want in the first place, so for such use cases the separation starts all the way from a unique cluster, DVS, hosts, etc.