VMware Cloud Community
kcucadmin
Enthusiast

VDR 1.1 How to Remove Damaged Restore Points

I need to remove some damaged restore points, but what do I need to do after I mark them for delete? How do I actually delete them or run a manual reclaim job?

Also: what are the ramifications for restore points that exist after the damaged restore point? Since backups are incremental, how can I restore from anything after the damaged restore point, since it would be missing some information?

11 Replies
jim3cantos
Contributor

For the first question, just run an integrity check manually.

kcucadmin
Enthusiast

Yeah, that didn't work. I gave up and just deleted my dedup store. When it takes 14 hours to run an integrity check, something is wrong.

ubersol
Contributor

Hello-

I was wondering how you initiated the integrity check. I posted this as a question, but I need to resolve this issue right now. I am running into a similar issue as yours and need to manually start an integrity check.

Thanks.

kcucadmin
Enthusiast

From the vSphere Client, go to the VMware Data Recovery plug-in, connect to the VDR appliance, and log in. Click the Configuration tab and go to Destinations.

Select the destination you want to check and make sure it is mounted, then start the integrity check from either the menu at the top or the right-click menu.

tugi
Contributor

Here is the procedure you need.

Remove Damaged Restore Points

Corrupt restore points, which are identified during integrity checks, should be removed. Restore points may be identified as damaged during transient connection failures. If transient connection failures are possible, check if damaged restore point issues are resolved after connections are restored.

Prerequisites

Before you can remove damaged restore points, you must have restore points in a functioning Data Recovery deployment.

Procedure

1. In the vSphere Client, select Home > Solutions and Applications > VMware Data Recovery.

2. Click the Reports tab and double-click the integrity check that failed. The Operations Log for the event opens in a separate window. Note which restore points triggered the failure.

3. Close the Operations Log and click the Restore tab.

4. From the Filter dropdown list, select Damaged Restore Points. Available restore points are filtered to display only the virtual machines with damaged restore points. It may be necessary to expand a virtual machine's node to display the damaged restore point.

5. Select damaged restore points for removal and click Mark for Delete.

6. Initiate an integrity check. Completing an integrity check causes all restore points marked for deletion to be removed.

7. Review the results of the integrity check to ensure no damaged restore points remain.

Turgay.

zemitch
Enthusiast

(vdr 1.2.0.1131)

This procedure does not work. I always get errors when performing an integrity check, and the damaged restore points are not deleted.

28.10.2010 15:52:18: Executing Recatalog

28.10.2010 15:52:18: To Backup Set /SCSI-0:1/...

28.10.2010 16:18:03: Starting full integrity check

28.10.2010 19:32:57: Integrity check failed for the restore point created on 06.10.2010 00:19:20 for.....

28.10.2010 19:32:57: Integrity check failed for the restore point created on 06.10.2010 21:00:02 for ....

28.10.2010 15:32:27: Executing Integrity Check

28.10.2010 15:32:27: To Backup Set /SCSI-0:1/...

28.10.2010 15:47:56: Trouble reading from destination volume, error -2241 ( Destination index invalid/damaged)

28.10.2010 15:47:56: Backup Set "/SCSI-0:1/" will be locked until the restore point with errors are deleted and integrity check succeeds.

27.10.2010 07:45:01: Executing Integrity Check

27.10.2010 07:45:01: To Backup Set /SCSI-0:1/...

27.10.2010 08:26:27: Starting full integrity check

27.10.2010 11:16:29: Integrity check failed for the restore point created on 06.10.2010 00:19:20 for ....

27.10.2010 11:16:29: Integrity check failed for the restore point created on 06.10.2010 21:00:02 for ....

27.10.2010 11:16:29: Integrity check failed for the restore point created on 06.10.2010 21:00:05 for ....

How can I manually delete these damaged items?

Thanks in advance.

KennyView
Contributor

Has anyone found a solution for this?

Same issue over here: damaged restore points are marked for deletion, but the integrity check does not delete them.

12/17/2010 4:13:16 PM: Executing Integrity Check
12/17/2010 4:13:16 PM: To Backup Set
12/18/2010 4:32:57 PM: Starting full integrity check
12/19/2010 6:43:05 AM: Integrity check failed for the restore point created on 12/6/2010 9:19:44 AM for
12/19/2010 6:43:18 AM: Integrity check failed for the restore point created on 12/7/2010 11:02:36 PM for
12/19/2010 6:43:30 AM: Backup Set  will be locked until the restore point with errors are deleted and integrity check succeeds.
12/19/2010 6:43:31 AM: 2 task errors
12/19/2010 6:43:31 AM: Completed: 211 files, 859,8 GB
12/19/2010 6:43:31 AM: Performance: 3812 MB/minute
12/19/2010 6:43:31 AM: Duration: 1.14:30:08 (00:00:42 idle/loading/preparing)

12/15/2010 10:02:59 AM: Executing Integrity Check
12/15/2010 10:02:59 AM: To Backup Set ......
12/15/2010 11:07:23 PM: Starting full integrity check
12/16/2010 12:24:35 PM: Integrity check failed for the restore point created on 12/6/2010 9:19:44 AM for
12/16/2010 12:24:44 PM: Integrity check failed for the restore point created on 12/7/2010 11:02:36 PM for

12/16/2010 12:24:53 PM: Backup Set will be locked until the restore point with errors are deleted and integrity check succeeds.
12/16/2010 12:24:55 PM: 2 task errors
12/16/2010 12:24:56 PM: Completed: 211 files, 859,8 GB
12/16/2010 12:24:56 PM: Performance: 5570 MB/minute
12/16/2010 12:24:58 PM: Duration: 1.02:21:51 (00:01:27 idle/loading/preparing)

glowle
Contributor

Has anyone found a solution for this?

I have exactly the same issue here too: damaged restore points are marked for deletion, but the integrity check does not delete them.

Starting to lose patience with this now, I'm on the brink of blowing away my volumes and starting afresh.

Gavin.

GenPT
Contributor

I had a similar issue, but it turned out that my CIFS backup store was out of space. I had the SAN administrator grow the share, and it started working.
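If you want to rule out this cause, you can check free space on the backup destination from the appliance shell. A rough sketch (the mount point `/mnt/vdrstore` is hypothetical, and the ~10 GB threshold is a guess, not a documented limit; substitute the path where your CIFS share is actually mounted):

```shell
#!/bin/sh
# Check free space on the VDR backup destination before an integrity
# check. Pass the path where the share is mounted on the appliance;
# "/mnt/vdrstore" is a hypothetical example, and "." is only a
# fallback so the script runs anywhere.
STORE="${1:-.}"

# The fourth column of POSIX "df -P" output is available space in 1K blocks.
avail_kb=$(df -Pk "$STORE" | awk 'NR==2 {print $4}')
echo "Available on $STORE: ${avail_kb} KB"

# VDR needs working space to restructure the dedup store; flag
# anything under ~10 GB as suspect (threshold is a guess).
if [ "$avail_kb" -lt 10485760 ]; then
  echo "WARNING: low free space may cause integrity check failures"
fi
```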

OldUberGoober
Contributor

At least in my case, it appears to have been a backup filesystem that needed an fsck. I stopped the VDR daemons, unmounted the filesystem, and ran an fsck, and it seems much happier now. The clue was a message on the console of the VDR appliance.
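The steps above can be sketched as a script to run on the appliance console. The service name `datarecovery`, the device `/dev/sdb1`, and the mount point `/mnt/backupstore` are assumptions for illustration; verify them against `/etc/init.d` and `/etc/fstab` on your appliance. With `DRY_RUN=1` (the default) the script only prints what it would do:

```shell
#!/bin/sh
# Sketch of an offline repair of the VDR backup volume. Service name,
# device, and mount point below are ASSUMPTIONS -- check your
# appliance before setting DRY_RUN=0.
DRY_RUN="${DRY_RUN:-1}"
DEV="${DEV:-/dev/sdb1}"
MNT="${MNT:-/mnt/backupstore}"

PLAN=""
run() {
  PLAN="${PLAN}${*}; "            # record the step for review
  if [ "$DRY_RUN" = 1 ]; then
    echo "would run: $*"          # dry run: print instead of executing
  else
    "$@"
  fi
}

run service datarecovery stop     # stop the VDR daemons first
run umount "$MNT"                 # fsck requires an unmounted volume
run fsck -f "$DEV"                # repair the backup filesystem
run mount "$MNT"
run service datarecovery start    # then re-run the integrity check
```

The dry-run default is deliberate: fsck on a mounted or wrong device can destroy data, so review the printed plan before executing anything.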
