VMware Cloud Community
timeshok
Contributor

VReplicator Design Network

Hello everyone,

We have a new infrastructure configured as follows:

Prod:

  • ESXi 6.7 U2 - vCenter 6.7 U2 - vReplicator 8.2 - SRM 8.2

DR:

  • ESXi 6.5 U2 - vCenter 6.5 U2 - vReplicator 8.2 - SRM 8.2 (we use the 6.5 version because it is the last release supported by our storage)

On network side, we have the following VLANs:

  • Prod
    • VLAN A - contains ESXi, vCenter, vReplicator and SRM (all in the same VLAN to limit traffic through the firewall)
    • VLAN B - vMotion
    • VLANs for VMs
    • each host has 2 FC 10 Gb NICs
      • only 1 vSwitch carrying all traffic (vmk0 for Management and vmk1 for vMotion)
  • DR
    • VLAN C - contains ESXi, vCenter, vReplicator and SRM (all in the same VLAN to limit traffic through the firewall)
    • VLAN D - vMotion
    • VLANs for VMs
    • each host has 6 Ethernet 1 Gb NICs
      • vSwitch0 for Management and vMotion (2 NICs)
      • vSwitch1 for VM traffic (4 NICs)

VLAN A communicates with VLAN C through a site-to-site VPN dedicated to DR replication. Replication traffic from the source vReplicator to the DR vReplicator will run over this VPN.

By default, the vReplicator provides a single network interface for both management and replication. In this case, must we create an additional VMkernel port for the vReplicator and Replication NFC traffic, or can we leave this configuration as is?

Globally, the number of VMs under protection will be 30-50 at most.

Thanks for the support

5 Replies
scott28tt
VMware Employee

By "vReplicator" do you mean vSphere Replication or something else?


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
timeshok
Contributor

Yes, I mean vSphere Replication.

timeshok
Contributor

Hello,

Here is some additional info:

  • the number of VMs under replication and protection will be around 40-50
  • all VMs together use around 20-25 TB
  • VLAN A (replicated to VLAN C through the VPN) also carries replication traffic from the backup systems; we chose to put the backup server in the same VLAN to avoid traffic through the firewall
MJMSRI
Enthusiast

Hi, for your infrastructure it is best practice to run the same vSphere version at both sites: if you fail over from 6.7 to 6.5, the VMware Tools versions will differ, and when failing back the VMs will go from 6.5 to 6.7, so it is best to have 6.7 at both sites. I see that storage compatibility is why the DR site is on 6.5, so ideally replace that storage so you are on the same version at both sites. If that is not an option, then at least test a VM failover from 6.7 to 6.5 and a failback, to make sure there are no issues.

For replication, it looks like you are aiming for traffic isolation. If so, create a new vSS/vDS and port group for replication, then create a VMkernel port with the replication service enabled on each host. On each Replication appliance, add an additional interface, so each appliance has two interfaces: one for management and one for replication. On the replication interface, set a static IP via the VAMI of the Replication appliance (:5480 login). Once that is set, go back to the configuration page and set the same IP address in "IP Address for Incoming Storage Traffic"; that will then set up the segregation.
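Done from the ESXi shell, the VMkernel part of the steps above looks roughly like this (a sketch; the interface name vmk2, the port-group name "Replication", and the IP addressing are assumptions for illustration):

```shell
# Create a VMkernel adapter on the (assumed) "Replication" port group
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=Replication

# Give it a static IP on the replication VLAN (example addressing)
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static

# Tag the adapter for outgoing vSphere Replication traffic (source-site hosts)
esxcli network ip interface tag add --interface-name=vmk2 --tagname=vSphereReplication

# On target-site hosts, also tag it for incoming replication NFC traffic
esxcli network ip interface tag add --interface-name=vmk2 --tagname=vSphereReplicationNFC
```

The same tagging can also be done per host in the vSphere Client under the VMkernel adapter's enabled services.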

ashilkrishnan
VMware Employee

Hi

By default, the management NIC on the source ESXi host and the management NIC of the target VR appliance are used to send and receive VR NFC traffic. If you wish to segregate vSphere Replication traffic to isolate it, yes, you can do that. Adding to what user "MJMSRI" mentioned above, I would suggest you refer to the following document for instructions:

Isolating the Network Traffic of vSphere Replication
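As a quick sanity check on any host, you can list the service tags currently assigned to a VMkernel adapter from the ESXi shell (a sketch; vmk0 is the usual management interface, and vmk2 is an assumed dedicated replication interface):

```shell
# Show which services (Management, vSphereReplication,
# vSphereReplicationNFC, ...) are tagged on a given VMkernel adapter
esxcli network ip interface tag get --interface-name=vmk0
esxcli network ip interface tag get --interface-name=vmk2

# List all VMkernel adapters and their port groups for an overview
esxcli network ip interface list
```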

Hope that helps
