Why is my Discourse postgres_data folder so big?

Today I was evaluating my Discourse server and found two weird things.

1. If I understand correctly, the postgres_data folder in /var/discourse/shared/standalone is where Discourse stores the database. For my forum this folder is about 8 GB, which I believe is too big for a humble forum. Can someone explain why it's so big?

2. I have another folder named postgres_data_old that is also about 7 GB. What is it for?

Also, my server had about 4 GB of memory and I found it mostly consumed, so I upgraded it to 8 GB. Again, I don't think a humble forum should need that much memory.

There was a PostgreSQL update recently; the old folder is kept in case something went wrong.

If your forum works fine, you can run these commands:

cd /var/discourse
./launcher cleanup app

That should clean up the old PostgreSQL data folder.

As for memory, Discourse works this way: it uses as much memory as it can, so you don't have to worry about that.

If you want it to use less memory, you can change db_shared_buffers in app.yml (command: nano containers/app.yml).
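
For reference, in a standard containers/app.yml this setting lives in the params section and looks roughly like the snippet below; the 256MB value is only an example (a common rule of thumb is at most 25% of total memory), and the change only takes effect after a ./launcher rebuild app.

params:
  db_default_text_search_config: "pg_catalog.english"
  ## rule of thumb: at most ~25% of total memory
  db_shared_buffers: "256MB"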

What about the big database? Is something wrong with my forum? I think it's way larger than it should be.

How many posts do you have?

I'll let a specialist give you an answer about that. I know Discourse stores a lot of information to provide relevant statistics and a good search engine, so it may not be alarming.

You can run the following commands to see which tables are taking up the most disk space:

./launcher enter app
su - postgres
psql discourse
SELECT nspname || '.' || relname AS "relation",
    pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
  FROM pg_class C
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
  WHERE nspname NOT IN ('pg_catalog', 'information_schema')
    AND C.relkind <> 'i'
    AND nspname !~ '^pg_toast'
  ORDER BY pg_total_relation_size(C.oid) DESC
  LIMIT 20;

This is the output for my forum. I think the first five rows are consuming too much space. I can't imagine why user_actions should be around 2 GB, or post_timings about 1 GB. Can you give me an idea of what could be wrong?

On the other hand, is there any way to clean up unnecessary data? For example, maybe I can get rid of most of email_logs. I don't send many emails, so I don't know why it is so big.

 public.user_actions     | 1792 MB
 public.email_logs       | 1293 MB
 public.post_timings     | 731 MB
 public.directory_items  | 456 MB
 public.topic_views      | 446 MB
 public.posts            | 380 MB
 public.post_search_data | 298 MB
 public.notifications    | 170 MB
 backup.topic_views      | 156 MB
 public.post_actions     | 155 MB
 public.user_histories   | 134 MB
 backup.directory_items  | 123 MB
 public.user_auth_tokens | 110 MB
 public.users            | 91 MB
 backup.user_auth_tokens | 87 MB
 public.user_visits      | 77 MB
 backup.posts            | 68 MB
 backup.post_timings     | 65 MB
 backup.post_search_data | 64 MB
 public.optimized_images | 63 MB

Did you do an upgrade recently?

I've seen a commit about cleaning up the email_logs table. After a rebuild, it may be lighter.

Also, you might check the delete email logs after days site setting; it should be safer than deleting the rows manually.

Thank you very much. I found this option and changed it from 90 days to 10 days.

What about user_actions? What is stored in this table that has made it so big? I have the same question about post_timings and topic_views. The names suggest these should be just a bunch of numbers, and they shouldn't really take this much space.

You can run a backup, download it, and analyze it locally. A pg_dump is just a human-readable text file that lets you check exactly what is in each table.
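
For example, assuming a standard backup archive layout (a .tar.gz containing dump.sql.gz; the file names here are placeholders), something like this gives rough per-table row counts straight from the dump:

tar -xzf your-backup.tar.gz
gunzip dump.sql.gz
# count the data lines inside each COPY ... FROM stdin block
awk '/^COPY /{t=$2; n=0; next} $0=="\\."{if (t) print n, t; t=""; next} t{n++}' dump.sql | sort -rn | head -20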

I followed your suggestion and downloaded and extracted the backup. It was about 2 GB. Is it normal that it's a quarter of what Discourse reports?

By the way, I realized that a huge amount of the data is for an excessive number of inactive users, more than 100k of them. Is there an automatic way to delete all these users? They don't have any posts or other things that might break the process.

If there is no automatic way and I remove them with API calls, does that also clear all information related to them from the database?

Was 2 GB the size of the compressed backup? Also, backups don't include indexes, and those take a lot of space.

An inactive user without posts or likes is just a single row in the users table. Are you sure all the space is coming from inactive users?

Did your forum come from an import? Maybe the import created some bad data.
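
If you want to sanity-check that, a rough query along these lines (run from psql discourse as above; users and posts are standard Discourse tables) counts users who have never posted:

SELECT count(*)
  FROM users u
 WHERE NOT EXISTS (SELECT 1 FROM posts p WHERE p.user_id = u.id);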

No, it's the size of the extracted backup. The compressed backup is about 300 MB.

backup.topic_views

It looks like you did a restore recently, and the data from before the restore was kept in the backup schema. If you're certain that you no longer need to recover to the previous state, you can drop the schema by running the following commands.

./launcher enter app
su - postgres
psql discourse
ALTER SCHEMA "backup" TO "backup-moved";
# Check that your site is still working and up to date
DROP SCHEMA "backup-moved" CASCADE;

I thought I'd revive this topic, since it's the same issue.

Our Discourse postgres_data folder is 75 GB, which I think is a lot. According to the admin panel, a backup is around 10.5 GB and the uploads take about 9.3 GB.

I've checked which tables are taking the most space, and this is what I get:

 public.posts                | 51 GB
 public.post_search_data     | 9769 MB
 public.post_timings         | 3997 MB
 public.user_actions         | 2144 MB
 public.post_custom_fields   | 1039 MB
 public.topics               | 676 MB
 public.post_stats           | 663 MB
 public.post_replies         | 643 MB
 public.quoted_posts         | 523 MB
 public.user_visits          | 476 MB
 public.top_topics           | 403 MB
 public.user_auth_token_logs | 364 MB
 public.topic_links          | 353 MB
 public.topic_users          | 335 MB
 public.topic_views          | 301 MB
 public.user_histories       | 220 MB
 public.users                | 209 MB
 public.stylesheet_cache     | 194 MB
 public.directory_items      | 143 MB
 public.notifications        | 139 MB

I wonder if it's normal for the public.posts table to take so much space (51 GB). We're talking about a forum with 6M posts, which I don't see as anything extraordinary.

Is this normal?

To complete the above info, here is the output of rake db:stats:

 table_name         | row_estimate | table_size | index_size | total_size
 posts              |      8847417 | 39 GB      | 12 GB      | 51 GB
 post_search_data   |      5880635 | 8377 MB    | 1392 MB    | 9769 MB
 post_timings       |     23728606 | 1571 MB    | 2430 MB    | 4001 MB
 user_actions       |      5424982 | 488 MB     | 1657 MB    | 2144 MB
 post_custom_fields |      5832468 | 429 MB     | 609 MB     | 1039 MB

I've seen other examples where 10 million rows in the posts table translate into just about 15 GB. We now have 8 million rows taking 39 GB.

Is there a way to optimize this?

There are lots of factors at play here. For example, long posts take more space than short ones, and as a fellow lusófono I know how verbose our language can be. I also see your data comes from an import; some artifacts, like posts with quotes nested five levels deep, aren't common in Discourse but show up on your site because of the import. The language matters too, since in UTF-8 a ç takes twice the space of an s.

I do believe we don't change the PostgreSQL defaults, and the posts.raw column goes to TOAST and is compressed there.
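
If you're curious where those gigabytes actually sit, a rough breakdown like this (run in psql discourse) splits the posts table into main heap, indexes, and the remainder, which is mostly TOAST where the large compressed text lives:

SELECT pg_size_pretty(pg_relation_size('posts'))   AS main_heap,
       pg_size_pretty(pg_indexes_size('posts'))    AS indexes,
       pg_size_pretty(pg_total_relation_size('posts')
                      - pg_relation_size('posts')
                      - pg_indexes_size('posts'))  AS toast_and_other;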

Thanks, that worked. I think that line might be missing RENAME though. I did it like this:

ALTER SCHEMA "backup" RENAME TO "backup-moved";