VMware Cloud Community
cintiadg
Contributor

Integrity Check Error

Hi, I am having some issues with my VDR (version 1.1).

In the beginning I tried to run the Integrity Check and got this error:

  • Integrity check failed for the restore point created on (date) (time) for (VM name)

  • Backup Set (destination name) will be locked until the restore point with errors are deleted and integrity check succeeds.

I marked the damaged restore point for deletion and tried to run the integrity check again, but I got the same error (and the damaged restore point was still there).

So, I opened the VDR console, stopped the DataRecovery service, deleted the configuration file, started the service, mounted the datastore, recreated the backup job, and tried to run the integrity check again. It failed again.
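(For reference, those console steps amount to roughly the following. The datarecovery service name and the datarecovery.ini config path below are assumptions and may differ on your appliance, so double-check them before running anything.)

  # stop the VDR backup engine (service name assumed to be "datarecovery")
  service datarecovery stop

  # remove the appliance configuration so it can be recreated
  # (the datarecovery.ini path is an assumption, not a confirmed location)
  rm /var/vmware/datarecovery/datarecovery.ini

  # start the engine again, then re-mount the destination and recreate
  # the backup job from the Data Recovery plug-in in the vSphere Client
  service datarecovery start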

The message below also appears in the VDR log:

  • Trouble reading from destination volume, error -2241 ( Destination index invalid/damaged)

  • Backup Set (destination name) will be locked until the restore point with errors are deleted and integrity check succeeds.

After some research, I read in a KB article that I had to power off the VDR appliance, delete the BackupStore.cat and BackupStore.cat.bak files, and start an integrity check again, so the engine would rebuild the catalog and I would be able to run the Integrity Check.

I did exactly what it said, but the problem persisted.
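(For reference, deleting the catalog files boils down to something like this from the appliance console. The /SCSI-0:1 mount point is an assumption; use df -h to see where your destination VMDK is actually mounted.)

  # the destination mount point is an assumption -- check it with: df -h
  cd /SCSI-0:1/VMwareDataRecovery/BackupStore

  # remove the catalog so the engine rebuilds it on the next integrity check
  rm -f BackupStore.cat BackupStore.cat.bak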

Looking at the KB again, I found that I had to remove the lock file (store.lck), but there isn't any lock file inside the BackupStore directory.

Conclusion: I can't back up the VMs because I can run neither the Integrity Check (even when I mark the damaged restore point for deletion) nor the Recatalog.

15 Replies
RParker
Immortal

I hate to break it to you, but you are probably going to have to delete the target, recreate it, and attach it to the VDR again. Basically, start over with backups. Your configuration will still be there (don't delete the entire appliance, just the target datastore).

fgl
Enthusiast

Cintiadg,

RParker is right: you will need to delete, recreate, and start over. I had exactly this problem, almost verbatim, just a few weeks ago. I tried exactly what you tried, but in the end I had to start over completely from scratch, in effect losing months' worth of backups.

cintiadg
Contributor

start all over?

Isn't there any other option to solve the issue without having to lose all the backups? ?:|

Thank you so much for the help!

kcucadmin
Enthusiast

start all over?

Isn't there any other option to solve the issue without having to lose all the backups? ?:|

I'm in the same boat. Well, at least this time I got a good two months out of the destination before I had to dump it and start over.

fgl
Enthusiast

Unfortunately not that I know of, at least with the current 1.1 version. The last time it happened to me, I spent two weeks looking for a way to fix it without starting over, but no luck. If someone else knows of any tips or tricks for these problems, please chime in.

parkut
Contributor

I hate to say "me too"... but there does not seem to be any recovery. Several weeks ago I spent considerable time with VMware support, who eventually confessed there was nothing I could do except start over, and teased me with the hint that a new version was currently in beta for release "soon".

My configuration has two backup jobs, with the default minimum retention time selected. The target datastores are 500 GB virtual disks on two different NAS devices. The backups run normally for 1-2 weeks, then the VDR appliance hangs during an integrity check. Rebooting the appliance sometimes resolves the problem, other times not. When the datastore is corrupted, there is no recovery from the locked condition, not even if you try to delete the corrupt restore points.

I have discovered that the easiest recovery is to reformat the corrupt target datastore; re-mounting it allows VDR to work until the next failure. Since the two backup jobs fail on different dates, so far (fingers crossed) I have always had a current restore point on one NAS or the other.
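(For anyone wanting to do the same from the appliance console, a rough sketch is below. The /dev/sdb1 device and the /SCSI-0:1 mount point are assumptions; verify the destination disk with fdisk -l first, because formatting it destroys every restore point on it.)

  # identify the destination disk -- formatting wipes all restore points on it
  fdisk -l

  # unmount the corrupt destination if it is still mounted (mount point assumed)
  umount /SCSI-0:1

  # recreate the ext3 filesystem on the (assumed) destination partition
  mkfs.ext3 /dev/sdb1

  # then re-add the destination and mount it from the Data Recovery plug-in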

jketron
Enthusiast

VDR is a lightweight solution that should only be used in test environments. It will get better :smileygrin:

kcucadmin
Enthusiast

VDR is a lightweight solution that should only be used in test environments. It will get better :smileygrin:

Funny, they don't label it that way in the sales material...

http://www.vmware.com/products/data-recovery/

I mean, looking at that site, I really don't get the impression it's for "labs" only...

LouwP
Contributor

Maybe read the manual before deciding... it says so right up front.

Trusting sales reps for purchasing decisions has always been a "bad thing".

kcucadmin
Enthusiast

Maybe read the manual before deciding... it says so right up front.

Trusting sales reps for purchasing decisions has always been a "bad thing".

Nowhere in the manual does it say that this is for "test lab" environments only.

In all the material I read, it is presented as a viable small-scale backup solution.

The way VMware marketed this product made it come across as a finished product and a reason to upgrade to the Essentials Plus, Advanced, or Enterprise license as a "feature".

I wasn't just listening to sales reps, but to sales engineers as well, which we all know isn't really any better.

Sofoski
Contributor

I had this error today when attaching some destination VMDK disks to new VDR 1.2 appliances. Out of four disks, it happened on one; thinking back, there were some damaged backups that I'd marked for deletion, and the reclaim should have removed them.

Regardless, I tried a few things, and what seems to have worked and gotten an integrity check to run is the following:

- Login to VDR appliance console

- Navigate to the mounted destination VMDK and go into the VMwareDataRecovery/BackupStore directory

- Remove the store.lck directory and its contents (rm -rf store.lck)

In my case there was one lock file in the store.lck directory, and it was dated the day that I'd marked the items for deletion and run the reclaim.
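(In case it helps, here is roughly what that looks like from the console. The /SCSI-0:1 mount point is an assumption; check where your destination VMDK is actually mounted with df -h.)

  # go to the backup store on the mounted destination (mount point assumed)
  cd /SCSI-0:1/VMwareDataRecovery/BackupStore

  # confirm what the lock directory contains before deleting it
  ls -l store.lck

  # remove the stale lock directory and its contents, then re-run the
  # integrity check from the Data Recovery plug-in
  rm -rf store.lck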

YMMV, and I am not a VMware support tech, but perhaps this may help others.

Cheers,

Andrew

VCP 3, 4 in training.

tdubb123
Expert

Why is this happening all the time? I had the problem with VDR 1.1 and upgraded to 2.0, but I'm still having the same problems after running it for a while.

Now I have to delete my whole VMwareDataRecovery folder on my CIFS share and remount.

Is this the only fix for this problem? I lose all my backups.

cag201110141
Enthusiast

You can try this solution:

http://communities.vmware.com/message/1892006#1892006

This will work for a while, but at some point you will still lose the datastore and have to start over again.

I found this seems to be related to the ratio of VMs to back up versus the size of the datastore.

We have a 1.0 ratio and a 1.37 ratio where both datastores needed to be reformatted and started over; it usually fails after 30-45 days.

racom
Enthusiast

The above link doesn't work for me. It looks like I'm not authorized to view it. :smileyconfused:

My recent problems started a week after I extended the datastore. I redirected the backup tasks to a second datastore and it continues to work well. The second datastore is the same size as the original datastore was before extending, so I'm not sure the problem is related to the ratio.
