What does the deployment look like? Cross-vCenter NSX with one Primary and one Secondary NSX Manager or two Standalone NSX Managers?
It will be cross-vCenter with one Primary and one Secondary NSX Manager.
One point to note: Enhanced Linked Mode is not required, but it can simplify management by providing a single vCenter view.
Considerations are as follows:
- The source and destination vCenter Server instances and ESXi hosts must be running version 6.0 or later.
- The cross vCenter Server and long distance vMotion features require an Enterprise Plus license. For more information, see Compare vSphere Editions.
- When using the vSphere Web Client, both vCenter Server instances must be in Enhanced Linked Mode (some documents state this is not a hard requirement) and must be in the same vCenter Single Sign-On domain so that the source vCenter Server can authenticate to the destination vCenter Server.
- Both vCenter Server instances must be time-synchronized with each other for correct vCenter Single Sign-On token verification.
- For migration of compute resources only, both vCenter Server instances must be connected to the shared virtual machine storage.
- When using the vSphere APIs/SDK, both vCenter Server instances may exist in separate vSphere Single Sign-On domains. Additional parameters are required when performing a non-federated cross vCenter Server vMotion. For more information, see the VirtualMachineRelocateSpec section in the vSphere Management SDK Guide.
- Maximum total sites: 8
- Maximum delay between two sites: 200 ms
- Local egress and ingress control may be important; a global load balancer such as F5 GTM may be needed, or dynamic routing protocols may be tuned to provide this capability.
- Procedures for handling site failure and recovery are important; automation can help.
- For active/active designs, if Storage vMotion is not part of the design, stretched storage clusters across sites (e.g. VPLEX Metro, VPLEX Geo) may play an important role. These technologies have strict latency requirements of around 10 ms, which constrains the distance between the two sites.
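The numeric limits in the list above can be captured in a small design-validation sketch. This is illustrative only: the data model and function name are not any VMware API, and the thresholds are simply the figures quoted above (8 sites maximum, 200 ms inter-site delay, ~10 ms for stretched storage such as VPLEX Metro).

```python
# Sketch: validate multi-site design constraints from the checklist above.
# The data model and function name are illustrative, not a VMware API.
MAX_SITES = 8          # maximum total sites
MAX_SITE_RTT_MS = 200  # maximum delay between two sites
MAX_METRO_RTT_MS = 10  # strict delay limit for stretched storage (e.g. VPLEX Metro)

def validate_design(num_sites, site_rtts_ms, uses_stretched_storage=False):
    """Return a list of human-readable violations (empty list = design passes).

    site_rtts_ms is a list of (site_a, site_b, rtt_ms) tuples.
    """
    problems = []
    if num_sites > MAX_SITES:
        problems.append(f"{num_sites} sites exceeds the maximum of {MAX_SITES}")
    for a, b, rtt in site_rtts_ms:
        if rtt > MAX_SITE_RTT_MS:
            problems.append(f"{a}<->{b} RTT {rtt} ms exceeds {MAX_SITE_RTT_MS} ms")
        if uses_stretched_storage and rtt > MAX_METRO_RTT_MS:
            problems.append(
                f"{a}<->{b} RTT {rtt} ms exceeds the {MAX_METRO_RTT_MS} ms "
                "stretched-storage limit")
    return problems

# Example: two sites 15 ms apart is fine for vMotion-based designs,
# but too far apart once stretched storage is in the picture.
print(validate_design(2, [("SiteA", "SiteB", 15)]))                              # []
print(validate_design(2, [("SiteA", "SiteB", 15)], uses_stretched_storage=True))  # one violation
```

The point of the example is the last two lines: the same physical distance can pass or fail depending on whether the design leans on stretched storage instead of Storage vMotion.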
Thanks for the reply. I was asking for NSX deployment assumptions, but the reply looks like vCenter Server and ESXi assumptions.
NSX can be tuned to fit the business continuity or DR design. This document details each scenario, but the key architectural points (from an application viewpoint) are summarized below. Various options exist depending on the design:
Disaster Recovery Scenarios: a Disaster Recovery solution with NSX is designed and tested to support the following failure and recovery conditions:
• Partial Application Failover – Only part of the application is failed over from the Protected to the Recovery site. The application components on the Protected and Recovery sites continue to function and communicate as before.
• Full Application Failover – The entire application is failed over from the Protected site to the Recovery site.
• Site Failure – The entire site has failed, including the NSX components at the Protected site. The application and NSX components are recovered at the Recovery site.
Page 29 lists additional design considerations to keep in mind for a DR deployment with Cross-VC NSX:
• Some of the caveats associated with Cross-VC NSX impact the Disaster Recovery solution outlined above (please refer to the NSX 6.2 documentation)
• To support Cross-VC NSX and the resulting VXLAN traffic between the two sites, an MTU size of 1600 bytes is required for this solution
• The maximum latency between the sites must be under 150 ms RTT
• Universal Logical Switches cannot be bridged to physical workloads, either using NSX bridging or third-party VTEPs
• Universal DFW firewall rules can only use MAC Sets, IP Sets, and Security Groups containing MAC/IP Sets (Security Tags, VM names, etc. cannot be used)
• Third-party/partner services insertion is NOT supported for Universal objects
• Endpoint security services from partners are NOT supported on Universal Logical Entities
• The route-advertisement and GSLB-based approaches to controlling ingress traffic are not mutually exclusive; they will likely co-exist in most designs. Some North/South traffic will be protected via GSLB and the rest with route advertisements
• The maximum granularity of Locale-ID is per host (it can also be assigned per cluster). If multiple applications fail over onto the same host (or cluster, where the Locale-ID is assigned at the cluster level), all of those applications MUST share a "locale" through which all N/S traffic egresses. This could be either the Recovery or the Protected site ESG, but not both, for applications sharing a host or a cluster with a single Locale-ID assignment
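The two transport requirements in the list above (MTU 1600 and sub-150 ms RTT) can be sanity-checked with a short sketch. The 1600-byte figure makes sense once you account for VXLAN encapsulation overhead: outer Ethernet (14) + outer IP (20) + UDP (8) + VXLAN header (8) = 50 bytes on top of a standard 1500-byte guest frame, so 1600 leaves headroom. The function below is illustrative, not a VMware tool; the thresholds come straight from the list.

```python
# Sketch: sanity-check the Cross-VC NSX transport requirements listed above.
# VXLAN adds 50 bytes of encapsulation: outer Ethernet (14) + IP (20) + UDP (8) + VXLAN (8).
VXLAN_OVERHEAD = 14 + 20 + 8 + 8   # bytes added per encapsulated frame
REQUIRED_MTU = 1600                # underlay MTU required per the list above
MAX_RTT_MS = 150                   # maximum inter-site latency per the list above

def check_transport(underlay_mtu, inter_site_rtt_ms, guest_mtu=1500):
    """Return (mtu_ok, latency_ok) for a Cross-VC NSX transport network."""
    mtu_ok = (underlay_mtu >= REQUIRED_MTU
              and underlay_mtu >= guest_mtu + VXLAN_OVERHEAD)
    latency_ok = inter_site_rtt_ms < MAX_RTT_MS
    return mtu_ok, latency_ok

print(check_transport(underlay_mtu=1600, inter_site_rtt_ms=40))   # (True, True)
print(check_transport(underlay_mtu=1500, inter_site_rtt_ms=180))  # (False, False)
```

In practice the MTU side of this is usually verified end-to-end from an ESXi host with a don't-fragment ping at VXLAN frame size, but a pre-deployment checklist like the above catches the design-level mistakes before anything is cabled.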