On my VDP appliance, the oldest checkpoint shows as validated, but the latest checkpoint is not validated. Is there any process to validate a specific checkpoint manually?
Thanks in advance.
Hi
Welcome to the communities.
Please follow the steps below:
Using a PuTTY console, log on to the appliance and check /data01/cur/err.log.
Take care!
1) Using PuTTY / SSH or the console, log on to the appliance as the root user.
2) Run the command 'cplist' (you should get output like the below):
root@VDP:~/#: cplist
cp.20140123160156 Thu Jan 23 09:01:56 2014 valid rol --- nodes 1/1 stripes 77
cp.20140123160631 Thu Jan 23 09:06:31 2014 valid --- --- nodes 1/1 stripes 77
The top checkpoint is the validated one (it shows 'valid rol').
The next checkpoint is not validated.
It is normal that the checkpoint created at the beginning of the maintenance window is the validated one, while the one created at the end of the maintenance cycle is not yet validated.
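The flag check above can be sketched in shell. This is a local sketch, not an official VDP tool: it parses the sample cplist lines from this thread via a variable so it runs anywhere, and it assumes the column layout shown (the validation flag is the 8th whitespace-separated field). On the appliance you would pipe cplist directly into the awk command instead.

```shell
# Sample cplist output (copied from the appliance output above).
# On the appliance itself you would use: cplist | awk '$8 == "rol" {print $1}'
cplist_output='cp.20140123160156 Thu Jan 23 09:01:56 2014 valid rol --- nodes 1/1 stripes 77
cp.20140123160631 Thu Jan 23 09:06:31 2014 valid --- --- nodes 1/1 stripes 77'

# Field 8 holds the validation flag; "rol" marks the validated checkpoint.
validated=$(printf '%s\n' "$cplist_output" | awk '$8 == "rol" {print $1}')
echo "Validated checkpoint(s): $validated"
# -> Validated checkpoint(s): cp.20140123160156
```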
3) You can also run the command 'status.dpn' (you should get output like the below):
root@VDP:~/#: status.dpn
Thu Jan 23 13:47:45 MST 2014 [VDP] Thu Jan 23 20:47:45 2014 UTC (Initialized Thu Dec 12 23:20:23 2013 UTC)
Node IP Address Version State Runlevel Srvr+Root+User Dis Suspend Load UsedMB Errlen %Full Percent Full and Stripe Status by Disk
0.0 10.7.X.Y 7.0.81-86 ONLINE fullaccess mhpu+0hpu+0hpu 2 false 0.70 2910 1799593 1.5% 1%(onl:27 ) 1%(onl:25 ) 1%(onl:25 )
Srvr+Root+User Modes = migrate + hfswriteable + persistwriteable + useraccntwriteable
All reported states=(ONLINE), runlevels=(fullaccess), modes=(mhpu+0hpu+0hpu)
System-Status: ok
Access-Status: full
Last checkpoint: cp.20140123160631 finished Thu Jan 23 09:06:55 2014 after 00m 24s (OK)
Last GC: finished Thu Jan 23 08:01:56 2014 after 00m 07s >> recovered 341.59 KB (OK)
Last hfscheck: finished Thu Jan 23 09:05:35 2014 after 03m 17s >> checked 46 of 46 stripes (OK) <<<<<<<<*********** CHECK HERE
Maintenance windows scheduler capacity profile is active.
The maintenance window is currently running.
Next backup window start time: Thu Jan 23 20:00:00 2014 MST
Next maintenance window start time: Fri Jan 24 08:00:00 2014 MST
If you look at the 'Last hfscheck' line, it should show that the check finished successfully.
If the last hfscheck did NOT finish successfully and shows an error, it is best to contact VMware support immediately to prevent potential data loss.
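That check can also be scripted. The sketch below is an assumption-laden example, not an official tool: it takes the 'Last hfscheck' line from the sample output above (stored in a variable so it runs anywhere) and tests whether it ends in '(OK)'. On the appliance you would capture the line from status.dpn as shown in the comment.

```shell
# Sample line from the status.dpn output above.
# On the appliance: status_output=$(status.dpn | grep 'Last hfscheck')
status_output='Last hfscheck: finished Thu Jan 23 09:05:35 2014 after 03m 17s >> checked 46 of 46 stripes (OK)'

# A trailing "(OK)" means the integrity check finished successfully.
if printf '%s\n' "$status_output" | grep -q '(OK)$'; then
    hfscheck_state="ok"
else
    hfscheck_state="FAILED - contact VMware support"
fi
echo "hfscheck: $hfscheck_state"
# -> hfscheck: ok
```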
Hi,
Current cplist output is as follows:
cp.20131223055205 Mon Dec 23 11:22:05 2013 valid hfs --- nodes 1/1 stripes 607
cp.20140109082710 Thu Jan 9 13:57:10 2014 valid --- del nodes 1/1 stripes 2732
cp.20140110075541 Fri Jan 10 13:25:41 2014 valid --- del nodes 1/1 stripes 2761
cp.20140111092734 Sat Jan 11 14:57:34 2014 valid --- del nodes 1/1 stripes 2820
cp.20140122040206 Wed Jan 22 09:32:06 2014 valid --- --- nodes 1/1 stripes 3617
cp.20140122062605 Wed Jan 22 11:56:05 2014 valid --- --- nodes 1/1 stripes 3617
The 'hfs' flag is only present for the Dec 23 checkpoint, not for the Jan 22 ones.
A second line of thought: are you running these checkpoints manually? If so, it is possible that the cp got suspended permanently. Check this KB:
VMware KB: Automatic integrity checks or scheduled backups do not start in vSphere Data Protection
I would suggest opening a case with VMware support to have them look into the hfscheck process failing if the above is not the solution.
If you SSH / PuTTY / console into the appliance, look at /data01/checklogs/cp.{date}/hfscheckresults to get an idea of why the checkpoint is not validating.
The command 'status.dpn' should tell you whether the cp / hfscheck processes are suspended, and it may also give information about why the hfscheck is failing.
I performed the steps given in the KB and started an integrity check manually. Let's see.