VMware Cloud Community
VirtuallyMikeB

vSphere Replication 6 VMkernel ports

Thanks to Jeff Hunter for his recent updates and documentation about vSphere Replication 6.0.  Reading the online docs, I have a few questions about the newly supported, dedicated vSphere Replication VMkernel ports.

Here (vSphere Replication 6.0 Documentation Center) and here (vSphere Replication 6.0 Documentation Center) are notes on configuring dedicated VMkernel ports for VR traffic on a source host and VR traffic on a target host (one for VR traffic and another for VR NFC traffic, respectively).

Considering that it's probably a common practice to use VR as the replication engine with SRM with the intention of failing back to the original production site, what's the value in configuring two VMkernel ports for VR?

At the Protected Site, you configure a VR VMkernel port to send traffic.  It sends replicated VM data to the Recovery Site's VR appliance, which in turn sends the replicated data to the Recovery Site's ESXi hosts' VR NFC VMkernel ports.

In order to fail back, then, the Recovery Site can (should?) have an additional VR VMkernel port, which sends replicated VM data to the original Protected Site's VR appliance, which in turn sends the replicated data to the original Protected Site's ESXi hosts' VR NFC VMkernel ports.

It looks like a distinction can, or should, be drawn between VR traffic flowing between sites and VR NFC traffic flowing within a site, since there are two VMkernel traffic types (VR and VR NFC).

What is the distinction that warrants a dedicated VR NFC VMkernel port? Why not just use the VR VMkernel port for everything? Thanks!
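For context, tagging a VMkernel adapter for a specific traffic type is exposed through the vSphere API's virtualNicManager. Here's a minimal pyVmomi sketch, assuming an existing vmk2 on a Protected Site host; the vCenter address, credentials, and host name are placeholders, not anything from the docs:

import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)

# Look up the source host by DNS name (datacenter=None searches everywhere).
host = si.RetrieveContent().searchIndex.FindByDnsName(
    None, "esxi-prot-01.example.com", False)

# Tag vmk2 for outbound vSphere Replication traffic on the source host.
host.configManager.virtualNicManager.SelectVnicForNicType(
    "vSphereReplication", "vmk2")

Disconnect(si)

The same call with "vSphereReplicationNFC" would tag a port on a Recovery Site host for the incoming NFC writes.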

Edit: I would consider these types of traffic to be of the same importance and security level, and I would have no issue putting both VMkernel ports in the same VLAN.  Doing so would put two VMkernel ports per host in the same network segment.  I'm wondering why I would want that rather than just using a single VMkernel port or multiple VLANs.
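If I did collapse both types into one VLAN, the setup might look like the following sketch: one port group per traffic type on the same VLAN, one VMkernel port each, tagged accordingly. The VLAN ID, vSwitch name, port group names, and addresses are made up for illustration, and 'host' is the HostSystem object from the sketch above:

from pyVmomi import vim

net_sys = host.configManager.networkSystem

for pg_name, ip, nic_type in [
        ("VR",     "192.168.15.10", "vSphereReplication"),
        ("VR-NFC", "192.168.15.11", "vSphereReplicationNFC")]:
    # One port group per traffic type, both on VLAN 1500.
    net_sys.AddPortGroup(vim.host.PortGroup.Specification(
        name=pg_name, vlanId=1500, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy()))

    # Add a VMkernel port with a static IP, then tag it for its traffic type.
    vmk = net_sys.AddVirtualNic(pg_name, vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip,
                             subnetMask="255.255.255.0")))
    host.configManager.virtualNicManager.SelectVnicForNicType(nic_type, vmk)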


1 Reply
Smoggy
VMware Employee

I think this basically boils down to options. You don't have to do this, but based on customer feedback we felt there were enough requests to provide a mechanism that lets you control not only the path that outbound (and inbound) replication traffic takes across the network (from source hosts to target VR appliances), but also the adapter used for VR NFC traffic at the target sites. As you know, VR leverages NFC to push the data down to the target datastores at the target sites, and some customers wanted to be able to separate that traffic flow as well.

So in the case of NFC, you could optionally set things up so that traffic to the storage hosts (by which I mean the hosts VR has determined have access to the target datastores) is sent out over a separate physical LAN, and lots of people asked for that flexibility. It gives customers the ability to isolate frequent VR NFC (and VR hostd) traffic from "regular" non-VR management traffic.

Once the VRMS notices that a host has a vmknic flagged for VR NFC, only that address is reported to the VR server, meaning that from then on we will use only that address for VR NFC traffic when talking to that host.
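If you want to check what got selected, the virtualNicManager's QueryNetConfig call shows the candidate and selected vmknics for a traffic type. A quick pyVmomi sketch, assuming 'host' is a HostSystem managed object from an existing session:

# List candidate vmknics for VR NFC and mark the one the VR server will use.
cfg = host.configManager.virtualNicManager.QueryNetConfig("vSphereReplicationNFC")

selected = set(cfg.selectedVnic or [])   # keys of the vmknics flagged for NFC
for cand in cfg.candidateVnic or []:
    marker = "*" if cand.key in selected else " "
    print(marker, cand.device, cand.spec.ip.ipAddress)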

Just my 2 cents on why we did this.
