VMware Networking Community
jlargaespada
Contributor

Is it possible to put vCenter on a VXLAN-backed port group?

Hi my name is Jorge L.

I want to do a small NSX deployment with an Active/Active stretched cluster, but I am confused about the design, because all the papers I have seen use regular dvPortgroups for the MGMT cluster, VXLAN-backed portgroups for the compute cluster, and regular dvPortgroups for the Edge cluster. I want to stretch the cluster with VXLAN only, but if site 1 dies I still need to be able to reach vCenter. Is it possible to put vCenter on a VXLAN-backed portgroup? Are there any caveats, restrictions, or advice for that kind of deployment?

I also watched NET1974, but it doesn't say anything about small deployments.

[Attachment: DIAGRAMA SOLUCION NSX.png]

5 Replies
cnrz
Expert

In general, if there is more than one DC and they are not in the same location, Cross-VC NSX or an L2VPN may be the better approach. This is because of the strict bandwidth and latency (distance) requirements, and because site interconnectivity problems can cause both management and connectivity issues, which calls for redundancy and availability between the sites.

The vMSC design also needs to be evaluated against vSphere and storage replication best practices. Since the NSX design relies on vSphere, designing the vSphere infrastructure according to NSX best practices is important.

Below are some links that compare the advantages and disadvantages of using NSX in a stretched cluster environment under different vSphere design requirements.

https://cloudsolutions.vmware.com/assets/blt1c82c2edb91424b6/Multi-DC-pooling-with-NSX.pdf

NSX with vSphere Metro Storage Cluster (vMSC): The use case for this design includes data centers that are close together within a metropolitan or campus area. This multi data center pooling solution has a 10ms RTT latency requirement for storage, or 5ms RTT when using vSAN. In this configuration, there is only one vCenter Server. The cluster(s) are stretched across the sites and share the same (synchronously replicated) datastore, which requires the low storage latency.

Since the NSX Manager needs to be placed on a VLAN-backed portgroup, and vCenter needs to reach the NSX Manager for management through the vSphere Web Client, putting vCenter on a VXLAN-backed portgroup may create problems: a failure in the data plane could cost you vCenter management access.
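
As a quick sanity check of that management path, you can ask the NSX Manager which vCenter it is registered to from any machine that can reach it. This is only a minimal sketch assuming the NSX-V 6.x REST API; the hostname nsxmgr.example.local and the credentials are placeholders:

# Ask the NSX Manager which vCenter it is registered to (placeholder host/credentials).
# If this call stops working during a data-plane outage, you have effectively lost the
# management path -- which is the risk of putting vCenter on a VXLAN-backed portgroup.
curl -k -u 'admin:PASSWORD' https://nsxmgr.example.local/api/2.0/services/vcconfig

# Connection status of the NSX Manager <-> vCenter link (assumed companion endpoint):
curl -k -u 'admin:PASSWORD' https://nsxmgr.example.local/api/2.0/services/vcconfig/status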

NSX  Adv vs Enterprise design question?

At least, that would be my reason to push customers toward multiple vCenters, especially if there is no compelling reason to do vMSC.

When you have 2 vCenter servers and NSX Managers, it is also not mandatory to connect them with cross-VC.

But of course it depends on your design factors, design requirements etc.

That also brings us back to my points in the previous reply: how would you perform vCenter or NSX Manager recovery when you do vMSC?

Will you do vMSC on management cluster too?

I don't think we can change the IP addresses of vCenter and NSX Manager if you are going to fail over to the other site.

That normally means you would need a stretched L2 network for management, which some network architects would prefer to avoid.

NSX with Separate Clusters vs Cross-VC NSX

At the moment you only have one VC in DataCenterA so you might want to think first whether you need a separate vCenter in DataCenter B or not.

The good thing with separate VC for DataCenterB is that you can access and manage DataCenterB if DataCenter A is unavailable.

But the drawback is you can't share same NSX objects (security groups, security policy/firewall rules, logical switch) unless you use Cross-VC NSX.

Especially with the DFW: for example, if you have some VMs in DataCenterA communicating with VMs in DataCenterB, you cannot use NSX dynamic security objects in your rules.

https://nielshagoort.com/2016/04/19/stretched-cluster-with-nsx/

Looking at the placement of the NSX components, you will see that the NSX Manager appliance and the NSX controllers are all placed in datacenter 1 following the stretched cluster approach with 1 vCenter instance. In this example scenario, we assume the management tooling is also placed on a stretched cluster.

[Attachment: stretchedcluster-nsx.png]

I did wonder if having all NSX management/control components on one site is a wanted scenario. Normally I would go for an, as equal as possible, distribution of these components over both sites, but digging more into how the control plane of NSX works, it really does not add any value to do so. It makes more sense to place them on the same site to avoid unnecessary traffic between the controllers over the datacenter interconnect. Another reason to place them together is to avoid any unwanted elections among the controllers if a datacenter partition failure would occur.

https://www.vmguru.com/2016/08/please-stop-stretching-vlans-virtualize-your-network/

Network admins hate stretching VLANs across data centers; we absolutely hate it. It causes potential instability on an inter-data center scope and destroys our isolated fault domains: if something happens with VLAN X on site A, it can also take down site B (unless you take special precautions). I spent a few hours last week and the week before helping out customers that had that exact issue, which triggered this post.

The entire idea of stretching VLANs between data centers is about virtual machine mobility. You can do a failover between sites and don’t have to make adjustments to your applications (IP address changes and IP references). Most of the time VLAN stretching comes from the business RTO requirement and the fact that most (traditional) applications can’t execute a failover on the application layer, without changing their make-up.

The Cross-VC NSX Design Guide compares different inter-DC DR and business continuity scenarios, including stretched clusters, L2 VPN, and Cross-VC NSX.

NSX-V Multi-site Options and Cross-VC NSX Design Guide

Active-Active with NSX and Stretched vSphere Clusters (vSphere Metro Storage Cluster)

In this solution vSphere clusters are stretched across sites. Anytime vSphere clusters are stretched across sites, it’s known as a specific configuration called vSphere Metro Storage Cluster (vMSC). A vMSC deployment also requires a stretched storage solution such as EMC VPLEX. Since the vSphere clusters are stretched across sites, the same datastore must be present and accessible at both sites.

A vMSC does not require NSX, but by deploying NSX with vMSC, one can get all the benefits of logical networking and security such as flexibility, automation, consistent networking across sites without spanning physical VLANs, and micro-segmentation. The connectivity between data centers can be completely routed while leveraging VXLAN for L2 extension via L2 over L3.

[Attachment: vsmc_topology.png]

[Attachment: NSX_Vmsc_Vxlan.png]

http://www.routetocloud.com/2016/02/nsx-dual-activeactive-datacenters-bcdr/

jlargaespada
Contributor

Hi Canero, thanks so much for the reply.

I already have vMSC (OTV + VPLEX) and we want to change to vMSC (VXLAN + VPLEX). The customer wants the HA/DRS features, which is why we only have one vCenter and one cluster. So the recommendation is not to use a VXLAN-backed PG for MGMT, right? If we need to recover the management plane (vCenter), we need to use the classic approach: stretched L2 for MGMT and VXLAN for the compute cluster.

So in NET1974, the scenario is: for the Management cluster, the stretch is L2 with OTV and/or dark fiber; for the Edge cluster, Active/Passive ESGs; and for the Compute cluster, VXLAN. Am I right?

[Attachment: 2018-01-11_09-47-42 (2).png]

cnrz
Expert

The NSX Manager and Controllers also require a VLAN-backed portgroup, so in general for small deployments where a separate Edge or Management cluster is not possible or required, the Management, Edge, and Compute clusters may be consolidated into one cluster (there are some recommendations, such as resource reservations, for this type of deployment).

If one vCenter and one cluster is a requirement because of HA/DRS (SRM may be an option for multi-site DR or BC), then only one NSX Manager and one Controller cluster can be deployed. In that case Cross-vCenter NSX is not possible, even though it brings advantages such as CDO mode, less dependency on the underlying network's latency, bandwidth, and availability, easier site-failure recovery, and local egress/ingress with dynamic routing.

In that solution the NSX Manager and Controllers normally reside in Site 1 (and specific configuration is needed to prevent the Controllers from ending up in separate sites). When Site 1 fails, HA/DRS is used to recover the NSX Manager and Controllers in Site 2. For a complete Active/Standby DR scenario, the Edges may be recovered with the same IP configuration in Site 2 in case of a Site 1 failure. (Again, for the Edges, configuration may be needed to prevent them from being dynamically vMotioned to Site 2 if the WAN or Internet connections exist only in Site 1.)
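
When validating that kind of failover, it helps to confirm where the Controllers ended up and whether they are healthy. A minimal sketch, assuming the NSX-V 6.x API; the hostname and credentials are placeholders:

# List the NSX Controllers with their status, IP, and placement details, to confirm
# they came back up on the surviving site after HA/DRS restarted them (placeholders).
curl -k -u 'admin:PASSWORD' https://nsxmgr.example.local/api/2.0/vdn/controller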

For compute workloads, under normal operation they may coexist in both Site 1 and Site 2, connected to a global logical switch (not a universal one, since there is only a single vCenter and NSX Manager) and a global DLR. Since VXLAN provides VM mobility over L3 networks, OTV is not needed, because there is no L2 network requirement; and since there is already a metro-cluster storage solution, this should be feasible. For the VXLAN deployment the MTU should not be a problem: if OTV is already in use, the underlying network infrastructure already supports 1600-byte MTU.
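
To confirm the underlay actually carries those larger frames end to end between the sites, a non-fragmenting ping on the VXLAN netstack between VTEPs is a quick test. This is a sketch only; the vmk interface name and the remote VTEP address are placeholders for your environment:

# On an ESXi host in Site 1, ping a Site 2 VTEP with don't-fragment set and a payload
# sized so the resulting frame needs ~1600-byte MTU (1572 + ICMP/IP headers = 1600).
vmkping ++netstack=vxlan -d -s 1572 -I vmk3 192.168.250.51

# List the VTEP vmkernel interfaces and their configured MTU on the host:
esxcli network ip interface list --netstack=vxlan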

For the Management and Edge clusters, OTV is again not needed since, although they are VLAN-backed, the cluster VMs reside in a single DC at all times.

NET1974 is from 2014; more current multi-site best practices are in NET1190BU and NET1191BU from 2017.

NET1190BU:

https://www.youtube.com/watch?v=P8wWAG5w7Hs&index=12&list=PLBMoYohMQ37d2TcBGMoO49K5GdnMNHumT

NET1191BU:

https://www.youtube.com/watch?v=h_RP3YBvqyI&index=13&list=PLBMoYohMQ37d2TcBGMoO49K5GdnMNHumT

parmarr
VMware Employee

You might want to try this:

1. After downloading the latest version of Player, make the .bundle file executable with the command: sudo chmod a+x VMware-Player-14.0.0-6661328.x86_64.bundle

2. Run the installer with this command: sudo ./VMware-Player-14.0.0-6661328.x86_64.bundle.

Does this make a difference?

Sincerely, Rahul Parmar VMware Support Moderator
Mid_Hudson_IT
Contributor

This was a very insightful posting. 🙂

VCP5/6-DCV, VCP6-NV, vExpert 2015/2016/2017, A+, Net+, Sec +, Storage+, CCENT, ICM NSX 6.2, 70-410, 70-411