gameto
Contributor

Please help with my VDP disk-full issue.

My VDP version is 6.1 (EMC-based).

The VDP backup schedule is not running because the appliance is in "admin" state.

The "gsan" service is degraded. From the logs it looks like this was caused by a full disk, but even though I cleared all my restore points, the disk space is not being reclaimed because the appliance status is "admin".

Here is some of my VDP state output. Can anyone help? Thank you so much.

root@vdp:~/#: dpnctl status
Identity added: /home/dpn/.ssh/dpnid (/home/dpn/.ssh/dpnid)
dpnctl: INFO: gsan status: degraded
dpnctl: INFO: MCS status: up.
dpnctl: INFO: emt status: up.
dpnctl: INFO: Backup scheduler status: up.
dpnctl: INFO: axionfs status: down.
dpnctl: INFO: Maintenance windows scheduler status: enabled.
dpnctl: INFO: Unattended startup status: enabled.
dpnctl: INFO: avinstaller status: up.
dpnctl: INFO: [see log file "/usr/local/avamar/var/log/dpnctl.log"]
root@vdp:~/#:

root@vdp:~/#: status.dpn
Thu Nov  3 13:01:01 KST 2016  [vdp.kj1.netact.skt.com] Thu Nov  3 04:01:01 2016 UTC (Initialized Mon Feb 15 06:27:42 2016 UTC)
Node   IP Address     Version   State   Runlevel  Srvr+Root+User Dis Suspend Load UsedMB Errlen  %Full   Percent Full and Stripe Status by Disk
0.0     38.123.3.89  7.2.80-98  ONLINE fullaccess mhpu+0hpu+0000   1 false   0.30 14872 16371111  32.9%  32%(onl:1166) 32%(onl:1164) 32%(onl:1167)
Srvr+Root+User Modes = migrate + hfswriteable + persistwriteable + useraccntwriteable

System ID: 1455517662@00:50:56:BA:A1:74

All reported states=(ONLINE), runlevels=(fullaccess), modes=(mhpu+0hpu+0000)
System-Status: ok
Access-Status: admin

Checkpoint failed with result MSG_ERR_DISKFULL : cp.20161103033244 started Thu Nov  3 12:33:14 2016 ended Thu Nov  3 12:33:14 2016, completed 0 of 3497 stripes
Last GC: finished Thu Nov  3 12:32:44 2016 after 00m 30s >> recovered 0.00 KB (MSG_ERR_DISKFULL)
No hfscheck yet

Maintenance windows scheduler capacity profile is active.
  The maintenance window is currently running.
  Next backup window start time: Thu Nov  3 20:00:00 2016 KST
  Next maintenance window start time: Fri Nov  4 08:00:00 2016 KST
root@vdp:~/#:

root@vdp:~/#: df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        32G  6.0G   24G  20% /
udev            7.8G  180K  7.8G   1% /dev
tmpfs           7.8G     0  7.8G   0% /dev/shm
/dev/sda1       128M   37M   85M  31% /boot
/dev/sda7       1.5G  160M  1.3G  12% /var
/dev/sda9       138G   16G  115G  12% /space
/dev/sdg1       512G  499G   14G  98% /data01
/dev/sdh1       512G  497G   16G  98% /data02
/dev/sdi1       512G  499G   14G  98% /data03
root@vdp:~/#:

I think the cause of the issue is the "/data01", "/data02", and "/data03" locations.

But those locations contain some checkpoint files.

I don't know how to delete those checkpoint files.

Please help me. Thank you so much.

Best Regards,

Neal.Choi

7 Replies
SavkoorSuhas
Expert

Could you provide me the output of:

avmaint nodelist | grep fs-per

cplist

cps

Suhas

If you found this or any other answer useful please consider the use of the Helpful or Correct buttons to award points. Don't Backup. Go Forward! Rubrik Peek into my Website: http://www.virtuallypeculiar.com
gameto
Contributor

root@vdp:~/#: avmaint nodelist | grep fs-per
        fs-percent-full="97.4"
        fs-percent-full="97.1"
        fs-percent-full="97.4"
root@vdp:~/#: cplist
cp.20161017000216 Mon Oct 17 09:02:16 2016   valid rol ---  nodes   1/1 stripes   3497
cp.20161017001937 Mon Oct 17 09:19:37 2016   valid rol ---  nodes   1/1 stripes   3497
root@vdp:~/#: cps

  GB used  %use  Total checkpoint usage by node:
1648.454        Total blocks on node           Mon Nov  7 14:12:33 2016
   44.825   2.72 Total blocks available
1037.171  62.92 cur                            Mon Nov  7 13:50:24 2016
  564.330  34.23 cur.1476663577                 Mon Oct 31 17:29:34 2016
    0.626   0.04 cp.20161017001937              Tue Oct 18 08:01:43 2016
    1.335   0.08 cp.20161017000216              Mon Oct 17 09:03:21 2016
1603.462  97.27 Total blocks used by dpn
root@vdp:~/#:
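For quick reference, the highest of those fs-percent-full values can be pulled out with awk. This is a small sketch that works on the sample lines above; against the live appliance you would pipe `avmaint nodelist | grep fs-per` into the same awk program:

```shell
# fs-percent-full lines copied from the avmaint nodelist output above.
nodelist='fs-percent-full="97.4"
fs-percent-full="97.1"
fs-percent-full="97.4"'

# Split on the double quotes so $2 is the numeric value, track the maximum.
echo "$nodelist" | awk -F'"' '$2 + 0 > max { max = $2 + 0 } END { print "max fs-percent-full: " max }'
# → max fs-percent-full: 97.4
```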

jnsvano
Enthusiast

It seems your disks really are full. Did you try deleting some backups and then running garbage collection?
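To see at a glance which partitions are critically full, the df output can be filtered with awk. A minimal sketch using rows copied from the df -h output earlier in the thread; the 95% threshold is my own choice, and against a live system you would pipe `df -h` in directly:

```shell
# Sample rows from the appliance's df -h output above.
df_output='/dev/sdg1       512G  499G   14G  98% /data01
/dev/sdh1       512G  497G   16G  98% /data02
/dev/sdi1       512G  499G   14G  98% /data03
/dev/sda9       138G   16G  115G  12% /space'

# $5 is the Use% column; strip the "%" and print any mount at or above 95%.
echo "$df_output" | awk '{ pct = $5; sub(/%/, "", pct); if (pct + 0 >= 95) print $6, "is", $5, "full" }'
# → /data01 is 98% full
# → /data02 is 98% full
# → /data03 is 98% full
```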

SavkoorSuhas
Expert

Going by the cps output, your cur directory is taking up a lot of space, with 3,497 data stripes in GSAN; those would be nothing but the backup data stripes.

I would recommend deleting old restore points that are no longer required and letting the appliance complete its next maintenance window so GC can finish.

You can then run status.dpn to see how much space GC reclaimed, and the same should be reflected on the partitions.
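The "Last GC" line of status.dpn is where the reclaimed amount shows up. A small sketch for extracting just that figure, using the sample line from the status.dpn output earlier in this thread; live usage would be `status.dpn | grep 'Last GC'` piped into the same sed command:

```shell
# Sample "Last GC" line copied from the status.dpn output above.
line='Last GC: finished Thu Nov  3 12:32:44 2016 after 00m 30s >> recovered 0.00 KB (MSG_ERR_DISKFULL)'

# Print only the recovered amount (number plus KB/MB/GB unit).
echo "$line" | sed -n 's/.*recovered \([0-9.]* [KMG]B\).*/\1/p'
# → 0.00 KB
```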

Suhas

gameto
Contributor

I tried deleting all the snapshot points.

But the VDP server's actual free capacity hasn't increased.

gameto
Contributor

Hi Suhas,

How can I delete the old restore points that are no longer required?

Would you explain that to me?

BR

Neal.Choi

SavkoorSuhas
Expert

1. Log in to the Web Client.

2. Connect to vSphere Data Protection.

3. Go to the Restore tab.

4. Select the virtual client; you will see its set of incremental backups. The list is sorted from newest to oldest, so you can identify the restore points that are quite old.

5. Select the restore points to be discarded and delete them using the Delete option.

6. Let the appliance's maintenance tasks (IC / GC / HFS check) run to completion.
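A side note on picking the oldest entries from the command line: checkpoint-style tags embed a UTC timestamp (cp.YYYYMMDDhhmmss), so a plain lexical sort orders them chronologically. A minimal sketch using the two tags from the cplist output earlier in this thread:

```shell
# Checkpoint tags copied from the cplist output above.
tags='cp.20161017001937
cp.20161017000216'

# The embedded timestamp makes lexical order equal chronological order.
oldest=$(echo "$tags" | sort | head -n 1)
echo "oldest checkpoint: $oldest"
# → oldest checkpoint: cp.20161017000216
```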

Suhas
