vbrowncoat's Accepted Solutions

You would need to unprotect the VM in the first PG and then reprotect it in the new one. You would need to reconfigure the start order for that VM as well. To avoid this, you could look at using array consistency groups (multiple datastores that are replicated together). If you were using these, you could svMotion VMs between datastores that are part of the consistency group without disrupting SRM.
Having 2 SRM instances managing/seeing the same LUNs won't be an issue, so you don't need to worry about creating new ones. The only time you'd have a problem with that is if you had to run a failover, as that would confuse the remaining SRM pair. You can use the same placeholder datastores, though I wouldn't recommend both instances using them at the same time. Remember that placeholder datastores can be very small and shouldn't be replicated.
Protection groups are a group of VMs you want to recover together. Restart order (defined through priority groups and dependencies) is set at the recovery plan level. See this blog post for details on protection groups: SRM Protection Group Design Considerations - VMware vSphere Blog
If you use vSphere Replication Protection Groups then yes. If you are using array-based replication, no.
Other than the IP customization documented in SRM (including the IP customizer tool, if used), VMware doesn't provide any other tools/scripts. You could write some kind of script and run it on the VM. SRM supports running scripts as part of a recovery plan. You could also just have the script run at power on.
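As one illustration of the kind of script mentioned above, here is a minimal sketch of the address-translation logic a recovery-time re-IP script might use. The function name and the subnet scheme are my own assumptions for the example, not an SRM tool or a VMware-provided script:

```python
# Hypothetical helper for a recovery-time re-IP script (names and subnet
# layout are my own illustration, not part of SRM). It maps a
# protected-site address onto the recovery-site subnet by swapping the
# network prefix while keeping the host portion identical.
import ipaddress

def map_to_recovery_subnet(addr, protected_net, recovery_net):
    """Translate addr from the protected subnet to the recovery subnet."""
    prot = ipaddress.ip_network(protected_net)
    rec = ipaddress.ip_network(recovery_net)
    # Offset of this host within the protected network.
    host_bits = int(ipaddress.ip_address(addr)) - int(prot.network_address)
    return str(ipaddress.ip_address(int(rec.network_address) + host_bits))

print(map_to_recovery_subnet("10.1.5.20", "10.1.0.0/16", "10.2.0.0/16"))
# -> 10.2.5.20
```

A real script would then apply the new address via the guest OS; this sketch covers only the mapping step, which is the part worth automating consistently across many VMs.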
The requirements for cross-vCenter vMotion are here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2106952 Note that they say: "When using the vSphere Web Client, both vCenter Server instances must be in Enhanced Linked Mode and must be in the same vCenter Single Sign-On domain so that the source vCenter Server can authenticate to the destination vCenter Server." If having them both in the same SSO domain isn't an option, check out: http://www.virtuallyghetto.com/2016/05/automating-cross-vcenter-vmotion-xvc-vmotion-between-the-same-different-sso-domai… Regarding cross-vCenter vMotion vs. vSphere Replication, it really just depends on your requirements (number of VMs, size of VMs, available bandwidth, whether downtime is acceptable, etc.).
Here are a couple of examples:
- If you are retaining 24 in a 1-day period, it will have 24 “slots” for retention in that day. It will keep the most up-to-date instance for each 1-hour slot. If you have a 15-minute RPO you will have 96 replicas per day, but it will keep only 24 of them, so it will retain the most up-to-date replica *roughly* every 4 replicas.
- If you have, say, a retention of 4 in a 1-day period, then you have 4 “slots” per day. It will retain the most up-to-date replica every 6 hours. If you have a 15-minute RPO it will retain roughly every 24th replica.
The oldest snapshot will be discarded to retain the newest and stay under the 24-snapshot limit.
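The slot arithmetic above can be sketched in a few lines. This is my own illustration of the math described in the answer, not vSphere Replication's actual retention code:

```python
# Sketch of the retention "slot" arithmetic described above (my own
# illustration, not VR's implementation). With a given RPO, the day is
# divided into as many slots as there are replicas to retain, and only
# the newest replica in each slot survives.

def replicas_per_day(rpo_minutes):
    """How many replicas the RPO produces in a 24-hour period."""
    return 24 * 60 // rpo_minutes

def keep_every_nth(retained_per_day, rpo_minutes):
    """Roughly every how-many-th replica is retained."""
    return replicas_per_day(rpo_minutes) // retained_per_day

# 15-minute RPO -> 96 replicas/day.
# Retain 24/day: keeps roughly every 4th replica.
print(keep_every_nth(24, 15))  # -> 4
# Retain 4/day (one every 6 hours): keeps roughly every 24th replica.
print(keep_every_nth(4, 15))   # -> 24
```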
vSphere Replication supports replicating virtual mode RDMs. They are converted to VMDKs upon recovery, though (which means they likely wouldn't work on an ongoing basis for a SQL cluster). If you could use regular VMDKs with your SQL cluster (which would mean you weren't using the multi-writer capability of physical mode RDMs), you could replicate it with vSphere Replication. The requirements of VR are the same regardless of the workload running in the VM.

Here is the quote from the KB: "vSphere Replication does not work with virtual disks opened in "multi-writer mode". MSCS cluster virtual machines are configured with virtual disks opened in "multi-writer mode," so vSphere Replication will not work with a MSCS configuration."

The only way I can see vSphere Replication protecting a SQL cluster using physical mode RDMs is in the one-time situation the articles outline, not on an ongoing basis. For that you would need to look at array-based replication.
This isn't possible with SRM. Just curious: why wouldn't running a test of the SRM recovery plan work in this case? That would let them test the workloads at the recovery site without copying changes back to the protected site.
VM location would be the entire VM (VMX file, page file, and the VMDKs if not chosen separately). The second selection is for the VMDK: by default it will use the target VM location, or a different location can be selected.
That doesn't seem normal/by design, and I wouldn't consider it expected. I have a similar configuration and I've never had to log in or reconnect after the initial configuration. If you weren't using the same SSO domain at both sites you might have to log in like you do in your 5.1 environment, but with the same domain you definitely shouldn't have to. That said, I don't know what would cause this. I would suggest opening an SR and looking through your logs.
Mattallford is correct: there is no issue with resizing a disk when using array-based replication, because replication is done at the LUN level. With vSphere Replication, replication is at the VM level, and VR runs into problems when the size of a disk on one end of the replication changes.
As of VR 6.0 this now uses a single port (Port Numbers that must be open for vSphere Replication 5.8.x and 6.x (2087769) | VMware KB). Separate ports were originally used to make it easier to QoS initial-sync and ongoing replication traffic separately. It was changed to a single port to simplify management (firewalls, etc.).
Just the number of development hours it would take, other priorities in the software (and elsewhere), needing to get the release out by a particular date, etc. We wanted to have support for other PGs, and we still do; it just didn't come together for this release, and we thought one was better than none.
Note, SRM doesn't replicate VMs; that is handled by either the array or vSphere Replication. Also, SRM (and the replication solutions that support it) is designed to operate with one site (the protected site) disconnected/unavailable. This is one of the most important aspects of SRM: each site is independent of the other from a DR recovery standpoint.

1.A Protected site (vCenter): no impact to recovery, no impact to replication (see above).
1.B Recovery site (vCenter): recovery cannot run without vCenter at the recovery site. Array-based replication and vSphere Replication will not be impacted (no change in RPO).
2.A Protected site (SRM server): no impact to recovery or replication (see above).
2.B Recovery site (SRM server): same as with vCenter, recovery cannot run without the SRM server at the recovery site. Array-based replication and vSphere Replication will not be impacted (no change in RPO).
3.A Protected site (VR appliance): no impact to recovery, no impact to VR or ABR replication.
3.B Recovery site (VR appliance): it depends on whether it is the VRS or the VRMS; if the VRMS, then VMs cannot be recovered. The replication impact depends on whether the VRS or VRMS is managing that VM's replication. See this article for more detail: vSphere Replication Appliance Failure Prevention and Recovery - VMware vSphere Blog

Note that in the case of an unrecoverable vCenter or SRM failure at the recovery site, VMs could still be recovered manually, as the VMs' data is still at the recovery site. This is why you want to protect/back up your vCenter, SRM server, and VRMS/VRS at both your protected and recovery sites. Also keep in mind that losing both your protected site (and needing to run a DR plan) and your vCenter and/or SRM server at the recovery site is very rare. Obviously it doesn't hurt to prepare for it, though.
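The failure scenarios above boil down to one rule: recovery depends only on recovery-site components. Here is that summary encoded as a small lookup table. This is my own condensation of the answer, not a VMware artifact, and the component labels are names I chose:

```python
# My own summary of the failure-impact matrix above (not a VMware
# artifact). Key: (component, site where it failed) -> whether an SRM
# recovery can still run. Replication is unaffected in every case except
# a recovery-site VR appliance failure, which depends on VRS vs. VRMS.
CAN_RECOVER = {
    ("vcenter", "protected"): True,   # protected site can be fully down
    ("vcenter", "recovery"):  False,  # recovery needs recovery-site vCenter
    ("srm",     "protected"): True,
    ("srm",     "recovery"):  False,  # recovery needs recovery-site SRM
    ("vrms",    "protected"): True,
    ("vrms",    "recovery"):  False,  # VRMS down at recovery site blocks VR recovery
}

def can_recover(component, failed_site):
    """True if an SRM recovery plan can still run given this failure."""
    return CAN_RECOVER[(component.lower(), failed_site.lower())]

print(can_recover("vCenter", "protected"))  # -> True
print(can_recover("SRM", "recovery"))       # -> False
```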
SRM doesn't change a VM's MAC address as part of recovery. It is stored in the VM's VMX file, so regardless of how the VM is replicated it will stay the same.
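This is easy to see in the VMX file itself: the MAC is just a key/value line, so whatever replicates the file carries it along unchanged. The parser below is my own minimal illustration of the VMX key/value format (the `ethernet0.generatedAddress` key is the real one; the sample MAC is made up):

```python
# Minimal sketch of reading a VMX file's key/value lines (my own
# illustration). The MAC lives in plain text here, which is why any
# file-level replication preserves it.

def parse_vmx(text):
    """Parse 'key = "value"' lines from a VMX file into a dict."""
    cfg = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            cfg[key.strip()] = value.strip().strip('"')
    return cfg

sample = '''
ethernet0.addressType = "generated"
ethernet0.generatedAddress = "00:50:56:ab:cd:ef"
'''

print(parse_vmx(sample)["ethernet0.generatedAddress"])
# -> 00:50:56:ab:cd:ef
```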
This should work without issue. I would just recommend a change in order to: 1 (also remove the LUN from the PG), 0, 2, 3. If you are going to leave the VM on the same LUN then step 0 wouldn't be required, but it is recommended (see below). If you want to leave the LUN replicating and can just svMotion the VM, that would work as well.

From a data protection standpoint, as long as you understand the potential complications, I would actually recommend starting with step 2. Technically you don't want the same VM replicated more than once; in this case, since you are transitioning, as long as you understand that recovering the VM using one method will break the other, and that you can't have the VM protected with both (VR & ABR) in SRM, you should be OK. Hope this makes sense. If not, let me know.
Here is how I'd recommend doing it:
- You specify the VSAN datastore at the recovery site as your placeholder datastore (because you can’t specify a folder).
- When you replicate VMs to the recovery site, you place them in a folder on the VSAN datastore at the recovery site called something like “VMs” or "Replicated VMs".
- Store all your regular VMs that are running at the recovery site in the folder above (or another folder). This way you don’t end up with multiple folders of the same name in the root of your VSAN datastore.
This won't cause any problems other than being confusing for you. It would be much better if the placeholder VMs could be placed in a folder (and this may come in a future version), but this is the next best thing (other than putting the placeholders on some other kind of shared storage).
This is normal/expected behavior. It is caused by the VSS quiescing and the VM snapshot that it requires. If your application doesn't require VSS quiescing, I wouldn't recommend using it. Most modern applications and operating systems handle crash consistency just fine (think about what happens to your application if the host the VM is running on crashes, or the OS bluescreens: does your application end up with corrupt data?).
When you recover the VM, it will recover to the most recent replica (not a PIT), and the 24 PITs will show up as snapshots in the snapshot manager. Then you would choose the snapshot you wanted to go back to, select it, choose "Revert to", and delete any remaining snapshots.