VMware Cloud Community
trogne
Enthusiast

one command to free up space in /storage/seat

I was trying to free up space in /storage/seat on my VCSA 6.5.

This KB is not applicable to 6.5:

vCenter Server Appliance 6.0 services fail due to 100% usage of /storage/seat disk (2119809)

I don't have "vpx_event" and "vpx_event_arg" tables.

Instead I have "vpx_event_1" to "vpx_event_92", and "vpx_event_arg_1" to "vpx_event_arg_92".

So I ran "truncate table vpx_event_1 cascade;" and so on for each of the "event" tables (184 tables!).

/storage/seat/ was back to 1%.
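In case it helps anyone, a loop like this should do the same thing without typing 184 statements by hand. This is only a rough, untested sketch: it assumes the default VCDB database name and that the tables really do run vpx_event_1 / vpx_event_arg_1 through _92 as described above.

# Sketch: truncate each numbered event table and its matching event_arg table
for i in $(seq 1 92); do
    /opt/vmware/vpostgres/current/bin/psql -U postgres -d VCDB \
        -c "TRUNCATE TABLE vpx_event_${i}, vpx_event_arg_${i} CASCADE;"
done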

But there's also this command:

sudo -u postgres /opt/vmware/vpostgres/9.4/bin/vacuumdb -a -e -v -f -U postgres > /tmp/vacuumdb.log

(as suggested at the bottom of this article: http://www.settlersoman.com/how-to-reclaim-space-on-storageseat-in-vcenter-appliance-5-5-or-6-x/)

But I don't have a "postgres" user. Is that OK?

So I tried this instead:

sudo -u root /opt/vmware/vpostgres/9.4/bin/vacuumdb -a -e -v -f -U postgres > /tmp/vacuumdb.log

And entered the password returned by this command:

cat /etc/vmware-vpx/embedded_db.cfg | grep

But I don't see anything changed. Is that normal?

When I tried running it before the "truncate", there was no change to /storage/seat either.

Thanks

11 Replies
srodenburg
Expert

I have the exact same situation. KB article 2119809 says it applies to VCSA 6.5, but it does not.

Running the vacuum command gets me nowhere. It hangs after a couple of hours and stays frozen until I break it off; the end result is that nothing happened. I have been successful with the truncate command, but one really needs to go through all 184 tables.

Chris_K1
VMware Employee

Note that this should be addressed using one of the scripts attached to the following KB:

KB/2110031 - Selective deletion of tasks, events, and historical performance data in vSphere 5.x and 6.x

dzak64
Enthusiast

I have the same issue. I tried using the script listed in the KB 2110031 article, but saw no change in the /storage/seat directory.

/opt/vmware/vpostgres/current/bin/psql -U postgres -v TaskMaxAgeInDays=30 -v EventMaxAgeInDays=30 -v StatMaxAgeInDays=30 -d VCDB -t -q -f /tmp/2110031_Postgres_task_event_stat.sql

Any other suggestions?

marsherian
Enthusiast

I ran into the same issue; I ended up manually resizing the volume:

PowerCLI:

Get-VM vc-01 | Get-HardDisk | Where-Object Filename -like "*vc-02_7.vmdk" | Set-HardDisk -CapacityGB 50

ssh root@vc-01

service-control --stop --all

umount /storage/seat/

# Test the physical volume resize first (-t is a dry run)

pvresize --setphysicalvolumesize 50G -v -t /dev/sdh

# Resize the physical volume for real

pvresize --setphysicalvolumesize 50G -v /dev/sdh

# Resize the logical volume and its filesystem (-r); probably left more room than needed

lvresize -L 48G -r -v /dev/seat_vg/seat

mount /dev/seat_vg/seat

df -h

service-control --start --all

yikching
Contributor

Apparently, the table structure in 6.5 is different from earlier versions.

Here is the psql block I used to loop through the vpx_event_[numeric] tables and clear them out:

DO
$$
DECLARE
    rec   record;
BEGIN
   FOR rec IN
      SELECT *
      FROM   pg_tables
      WHERE tablename ~ '^vpx_event_[0-9].*'
      ORDER  BY tablename
   LOOP
      EXECUTE 'TRUNCATE TABLE '
        || quote_ident(rec.schemaname) || '.'
        || quote_ident(rec.tablename) || ' CASCADE';
   END LOOP;
END$$;

The CASCADE option above should clear the vpx_event_arg_[numeric] tables as well, but sometimes it doesn't, so the following has to be used too:

DO
$$
DECLARE
    rec   record;
BEGIN
   FOR rec IN
      SELECT *
      FROM   pg_tables
      WHERE tablename ~ '^vpx_event_arg_[0-9].*'
      ORDER  BY tablename
   LOOP
      EXECUTE 'TRUNCATE TABLE '
        || quote_ident(rec.schemaname) || '.'
        || quote_ident(rec.tablename) || ' CASCADE';
   END LOOP;
END$$;

For vacuumdb, here is the command I used after running the above:

/opt/vmware/vpostgres/current/bin/vacuumdb -a -e -v -f -U postgres > /tmp/vacuumdb.log

twoton
Enthusiast

We've been having this issue repeatedly over the last month, and I hoped the latest round of patches would fix it, but it didn't. I got sick of having to increase the vCenter VMDK by a GB each time and then running the SQL commands to fix it, so I took yikching's commands and put them into a .sql file in root's home directory on the VCSA, then added a file to /etc/cron.daily that calls the SQL script to run the cleanup job each night. This should keep it from filling up again without me having to intervene manually. I also made a copy of the dbcleanup.sh script in root's home directory, so if I do need to run it manually it's as easy as typing ./dbcleanup.sh.

root@vcenter[ ~ ]# cat /etc/cron.daily/dbcleanup.sh

#!/bin/bash  

/opt/vmware/vpostgres/current/bin/psql -d VCDB postgres -f /root/cleanup.sql

root@vcenter[ ~ ]# cat /root/cleanup.sql

DO
$$
DECLARE
    rec   record;
BEGIN
   FOR rec IN
      SELECT *
      FROM   pg_tables
      WHERE tablename ~ '^vpx_event_[0-9].*'
      ORDER  BY tablename
   LOOP
      EXECUTE 'TRUNCATE TABLE '
        || quote_ident(rec.schemaname) || '.'
        || quote_ident(rec.tablename) || ' CASCADE';
   END LOOP;
END$$;

DO
$$
DECLARE
    rec   record;
BEGIN
   FOR rec IN
      SELECT *
      FROM   pg_tables
      WHERE tablename ~ '^vpx_event_arg_[0-9].*'
      ORDER  BY tablename
   LOOP
      EXECUTE 'TRUNCATE TABLE '
        || quote_ident(rec.schemaname) || '.'
        || quote_ident(rec.tablename) || ' CASCADE';
   END LOOP;
END$$;
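One small caveat (my assumption, not something I've checked on every VCSA build): the file in /etc/cron.daily needs to be executable or the nightly run will be skipped, e.g.:

chmod +x /etc/cron.daily/dbcleanup.sh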

RyanSeeley
Contributor

Thanks for posting this. Fixed my issue.

Ran this before:

/opt/vmware/vpostgres/current/bin/psql -d VCDB -U postgres

SELECT nspname || '.' || relname AS "relation", pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
  AND C.relkind <> 'i'
  AND nspname !~ '^pg_toast'
ORDER BY pg_total_relation_size(C.oid) DESC
LIMIT 20;

      relation       | total_size
---------------------+------------
vc.vpx_event_arg_1  | 302 MB
vc.vpx_event_arg_92 | 297 MB
vc.vpx_event_arg_4  | 295 MB
vc.vpx_event_arg_84 | 295 MB
vc.vpx_event_arg_83 | 294 MB
vc.vpx_event_arg_5  | 294 MB
vc.vpx_event_arg_2  | 293 MB
vc.vpx_event_arg_90 | 293 MB
vc.vpx_event_arg_3  | 292 MB
vc.vpx_event_arg_78 | 292 MB
vc.vpx_event_arg_80 | 292 MB
vc.vpx_event_arg_89 | 292 MB
vc.vpx_event_arg_88 | 292 MB
vc.vpx_event_arg_87 | 292 MB
vc.vpx_event_arg_82 | 291 MB
vc.vpx_event_arg_81 | 291 MB
vc.vpx_event_arg_86 | 291 MB
vc.vpx_event_arg_91 | 291 MB
vc.vpx_event_arg_85 | 291 MB
vc.vpx_event_arg_79 | 291 MB
(20 rows)

After running the script, it looked a lot better:

           relation           | total_size
------------------------------+------------
vc.vpx_proc_log              | 38 MB
vc.vpx_host_vm_config_option | 8584 kB
vc.vpx_topn_past_day         | 2904 kB
vc.vpx_task                  | 2488 kB
vc.vpx_topn_past_week        | 2336 kB
vc.vpx_topn_past_month       | 2152 kB
vc.vpx_journal_entry         | 2024 kB
vc.vpx_hist_stat3_25         | 1200 kB
vc.vpx_hist_stat3_24         | 1192 kB
vc.vpx_hist_stat3_22         | 1184 kB
vc.vpx_hist_stat3_23         | 1168 kB
vc.vpx_hist_stat3_26         | 1152 kB
vc.vpx_hist_stat3_27         | 1128 kB
vc.vpx_hist_stat3_8          | 1104 kB
vc.vpx_hist_stat3_11         | 1040 kB
vc.vpx_hist_stat3_12         | 1040 kB
vc.vpx_hist_stat3_2          | 1040 kB
vc.vpx_hist_stat3_14         | 1040 kB
vc.vpx_hist_stat3_18         | 1040 kB
vc.vpx_hist_stat3_16         | 1040 kB
(20 rows)

twoton
Enthusiast

Hi Folks-

Just an update... I found that you need to add "-h /var/run/vpostgres" to the psql command in order for it to run properly from cron; otherwise, the default socket it tries to connect to isn't correct.

root@vcenter[ ~ ]# cat dbcleanup.sh

#!/bin/bash  

/opt/vmware/vpostgres/current/bin/psql -d VCDB postgres -f /root/cleanup.sql -h /var/run/vpostgres
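If you want a record of what the nightly run did, you could also redirect the output to a log (just a suggestion; the log path is arbitrary):

/opt/vmware/vpostgres/current/bin/psql -d VCDB postgres -f /root/cleanup.sql -h /var/run/vpostgres > /root/cleanup.log 2>&1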

CiaranFoster
Contributor

Hi all.

I tried marsherian's approach first as I don't know how to run the SQL commands (more on that later).

I got as far as this step:

mount /dev/seat_vg/seat

and the session just sat there for hours and hours, so that was a failure.

I have now reverted to the snapshot from before I tried the above steps.

Next I saw this article and tried it:

VMware Knowledge Base

but the hard drives on the VM are grayed out and won't let me expand them. Note that this is a vCenter Appliance 6.5 running on an ESXi 6.0 host.

So now I'm back here looking to see what to do and looking at those SQL approaches.

But how do I execute those commands?

I am not a DBA, so I don't know how to run those within a PuTTY session.


Any help is appreciated.

Ciaran.

twoton
Enthusiast

You won't be able to resize the disks while you have an active snapshot. You will have to remove all snapshots, resize the VMDK, and then take a snapshot as a backup if you want one.

Once you get the disk expanded, I would just copy/paste those commands and save them in a script to make it pretty easy/painless.

CiaranFoster
Contributor

Hi there.

Thanks for that; it was only when you said it that the lightbulb went off in my head!

We managed to fix the issue using yikching's SQL scripts above, so kudos there.

I installed pgAdmin on my client and used the instructions here to get connected to the DB and run the commands:
http://www.vmwarearena.com/manage-vcsa-6-5-embedded-postgres-database-using-pgadmin-gui/

It took a while, what with the need to set up the appliance firewall etc., but we got there and successfully reduced usage on /storage/seat from 97% down to 3%.

Stopping and starting up all services on the appliance then got it fully operational again.

But then overnight it went back to 18%.

So next I patched the hosts up to the latest 6.0 release - 15018929.

I've flushed the disk back down to 3%, so we'll see how it goes over the weekend.

Thanks to all in this thread - I learned a lot in troubleshooting this and it was rewarding to get a solution.

Ciaran.
