Hi all, I'm at the final stages of a proof-of-concept with a pair of FalconStor NSSVAs replicating an environment. Most of the SRM side of things is now configured, but at the point where I try to create a Protection Group, my Array Pair isn't showing up (just a blank area where it should be).
So far I've:
What else could I be missing? The only thing that springs to mind is the patch levels of the FalconStor appliances (currently 6.15, Build 6164).
Because we have physical FalconStor devices (which is why I'm using these in the lab), I have access to patches if required, but the first few I tried installing errored out with no explanation, so I don't want to go breaking too much.
As I said, we have an existing physical environment around 6.15 so upgrading to 7 isn't an option in the timeframe I'm working to.
It's so frustrating to be this close but not able to get any further!
Any pointers would be VERY much appreciated!
When you connected the array pairs, it should show a list of the volumes that are set up for replication. If you don't see any, then there are no storage volumes set up for replication.
If you can see the volumes in the array pair but can't create protection groups, it might be because you don't have any VMs on the datastores that are being replicated.
A screenshot would also be helpful.
Do you already have VMs up and running in this environment? I suggest placing at least one or two VMs on replicated storage... Also, I tend to restart all SRM/SRA services before I start working with SRM, as more often than not I've found this was required.
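For what it's worth, bouncing the SRM service is just a Windows service restart on each SRM server. A rough sketch below; the service name `vmware-dr` is the usual one for SRM of this vintage, but check yours with `sc query` first (the actual commands are commented because they only run on the SRM server itself):

```shell
# Restart the SRM service on a Windows SRM server (elevated prompt).
# "vmware-dr" is an assumption about the service name; verify first.
#   net stop vmware-dr
#   net start vmware-dr
# Echoed here so the sketch is self-contained:
echo "net stop vmware-dr && net start vmware-dr"
```

Do this on both the Protected and Recovery site SRM servers before retrying the protection group.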
Thanks for the responses, both. I got all excited as I suspected I might not have had a VM on the replicated storage. I did, but it wasn't powered on, so I gave that a go, but with no change.
I've since added a second VM, followed by a restart of both SRM services (i.e. Protected & Recovery sites) but still no change.
I've just taken a series of screen-grabs (although I've added another VM since the included grab) so hopefully someone might see something I'm missing.
Check the vmware-dr.log files on your SRM servers; they should contain some information about the problem.
You could search the log to start the investigation of the problem.
Thanks for the pointer Ralf.
I've taken a look and, as a first-time user of SRM, find conflicting info in the log.
On the one hand it says "No replicated datastores found for array pair 'array-pair-3199'" (not sure where it gets 3199 from) but then further down it says "Continuous replication is enabled on device 4" and "CDR resource size = 10240 MB for device 4".
So, I'm still not sure if there's a genuine error.
Can anyone take a look at the attached copy of the logfile and let me know if something jumps out at them?
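For anyone else digging through vmware-dr.log, a quick grep pulls out the lines in question. The sample log below just reproduces the messages quoted above; the `/tmp` path is only for the demo, the real file lives in the SRM server's log directory:

```shell
# Recreate the quoted log lines as a sample, then grep for the
# array-pair and replication messages.
cat > /tmp/vmware-dr.log <<'EOF'
No replicated datastores found for array pair 'array-pair-3199'
Continuous replication is enabled on device 4
CDR resource size = 10240 MB for device 4
EOF
grep -E "array pair|replication|CDR" /tmp/vmware-dr.log
```

All three lines match here, which is exactly the confusing picture: the SRA sees the devices replicating, but SRM still reports no replicated datastores for the pair.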
I am not sure if your inventory mappings are correct from Site B to Site A. Can you ensure that all the mappings are done the same way you implemented the inventory mappings from Site A to Site B?
Also, looking through the logs, the messages you mention are about the replication setup on the storage. Is the replication set up two-way or one-way? From the logs it's only showing one-way replication.
You are indeed correct, I didn't have B->A mappings but have now. I restarted the SRM services just in case as well but no difference.
I believe I configured the replication both ways (there are TimeMarks on the recovery-site FalconStor) but can't be 100% sure, so I'm about to rip the drives out & add them again...
Still no joy, having completely removed & re-created a new VMFS (right down to deleting from FalconStor appliances & creating afresh).
One thing I'm not sure about: for the purposes of this exercise I'm "protecting" a single VMFS volume (and it's the only volume that exists, other than one placeholder volume per FalconStor appliance).
Further to an earlier query re: having replication in both directions, I now don't believe this is necessary (or possible) anyway, assuming my scenario above is correct, as only the "owning" host(s) will be making changes to the volume at any given time. It's only when the array manager "flips" to the recovery site that writes are made there, which will then (presumably) replicate back to the Protected Site once it's back up. At least, this is how things look when you view replication within the FalconStor management interface.
Does anything look missing/incorrect? I'm hoping I'm simply missing something fundamental here...
In a DR scenario, on Site B the replicated volume is mounted as a VMFS datastore to the servers mapped in the Inventory Mappings (the backend zoning needs to be in place for this, and the LUN assigned to the servers). So the data is now written to the volume that was replicated from Site A. When Site A comes back up, it doesn't switch over automatically. If you have replication set up from Site B to Site A on the volume, or the failback option selected in SRM, the data is first copied over for consistency and then a failback is done.
I am not familiar with FalconStor, so I'm not sure how it's set up. In any case, once the array configuration is done, the array pair should communicate and let you create a protection group with the datastores on the replicated storage.
I forgot to point out that I was working from a guide at http://communities.vmware.com/docs/DOC-11410 for the installation (it seems to cover a slightly earlier version of the FalconStor appliance, but the principles appear the same).
Just as I was about to post this, I thought I'd double-check my config against the document again and spotted that I hadn't actually created the "storage volumes" for the FalconStor VM on a different vSCSI adaptor. It was a long shot as to whether that was the cause, but I thought I might as well correct it anyway (effectively creating all the storage/replication again from scratch), but still no joy.
Once again I can get to the point where it's happy with the Array Pair but just nothing visible when creating a protection group.
I'll have a hard time convincing management to spend on SRM if I can't get a basic test working in the lab.
What version of SRM are you testing with?
I've just seen your post and interestingly I have the exact same issue with a test environment I'm building, the only difference is that I'm using the EMC Celerra UBER Appliance to provide storage. I have used the same configuration before with no issue.
Hi Paul,
Out of my own curiosity: you aren't using nested ESX servers for your SRM environment, are you?
I ask because I got exactly the same problem with virtualized ESXi servers running the latest code.
Regards from Germany,
Had a similar issue with a customer. The array pair was replicated, but the protection groups could not be created. We had VMs which were powered on (no OS, no data). Still no luck. In the end we just had to wait for the replication to complete before we could do anything.
Can you kick off a manual datastore refresh on all your hosts? Try putting in a couple of machines, powered on, with data on them. Try something like a vMA, which only takes up about 5 GB... :smileyconfused:
Hope this information helps with your issue:
As mentioned before, I was experiencing the exact same issue as you, the only difference being that I'm using the VNX SRA. The SRA configuration for VNX includes an optional entry for specifying which IP address to use for NFS presentation; to fix my issue I made sure this entry matched the IP address I was using when mounting the NFS datastore. Originally I was also using a DNS name to mount the NFS datastore, which I changed to an IP address.
So just to clarify, I did the following, in the order shown:
- Reconfigured the array settings by adding an IP address for NFS presentation
- Unmounted the existing NFS datastores
- Remounted the NFS datastores using IP addresses
I've since found out that all SRA adapters that support NFS currently don't play well with DNS.
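In case it helps anyone, the unmount/remount steps on the ESXi side look roughly like this. The datastore name, IP and export path below are placeholders for whatever your environment uses; the esxcli lines are commented because they only run on an ESXi host:

```shell
# Remount an NFS datastore by IP instead of DNS name.
# All three values are hypothetical placeholders.
DS="nfs_replicated"
NFS_IP="192.168.10.20"
NFS_PATH="/vol/replicated"

# On the ESXi host (via SSH or the console):
#   esxcli storage nfs remove --volume-name "$DS"
#   esxcli storage nfs add --host "$NFS_IP" --share "$NFS_PATH" --volume-name "$DS"
# Echoed here so the sketch is self-contained:
echo "esxcli storage nfs add --host $NFS_IP --share $NFS_PATH --volume-name $DS"
```

The key point is the `--host` argument: pass the same IP address you gave the SRA for NFS presentation, not a DNS name, so the two match when SRM compares them.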
I'm experiencing the same problem with nested ESX servers. Is there any workaround?
Are you using NFS?
I'm using iSCSI.
Did you add your array pairs in the SRA database?
You might have installed the SRA on both SRM servers, but you also need to register the array pairs in the SRA database.
Go through this link: https://communities.vmware.com/docs/DOC-11410
Hope this helps.