Today I was evaluating my discourse server and found two weird things.
1- If I understand correctly, the Discourse postgres_data folder, which is in /var/discourse/shared/standalone, is where Discourse stores the database. This folder for my forum is about 8GB, which I believe is too big for a humble forum. Can someone explain why it’s so big?
2- I have another folder named postgres_data_old that is also about 7GB. What is this for?
Also, my server had about 4GB of memory and I found it mostly consumed, so I upgraded it to 8GB. Again, I think a humble forum shouldn’t need that much memory.
I’ll let a specialist give you an answer about that. I know that Discourse stores a lot of information to provide relevant statistics and a good search engine, I guess. It may not be alarming.
You can run the following commands to see which tables are taking up the most disk space:
./launcher enter app
su - postgres
psql discourse
SELECT nspname || '.' || relname AS "relation",
pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
AND C.relkind <> 'i'
AND nspname !~ '^pg_toast'
ORDER BY pg_total_relation_size(C.oid) DESC
LIMIT 20;
This is the output of my forum for these commands. I think the first 5 rows are consuming too much space. I can’t imagine why user_actions should be around 2GB, or post_timings about 1GB. Can you give me an idea what could be wrong?
On the other hand, is there any way to clean up unnecessary data? For example, maybe I can get rid of most of email_logs. I don’t send many emails, so I don’t know why this table is so big.
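In case it helps, something like this in psql should show whether old mail is what makes the table big (just a sketch, assuming only the standard created_at column):
-- rows per month in email_logs
SELECT date_trunc('month', created_at) AS month, count(*) AS row_count
FROM email_logs
GROUP BY 1
ORDER BY 1 DESC
LIMIT 12;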
Thank you very much. I found this option and changed it from 90 days to 10 days.
What about user_actions? What is stored in this table that has made it so big? I have the same question about post_timings and topic_views. The names suggest these should just be a bunch of numbers, which shouldn’t really take this much space.
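For reference, here is a rough way to put approximate row counts next to the sizes, using the standard PostgreSQL statistics views (only a sketch):
-- approximate live row counts next to total size for the tables in question
SELECT relname,
       n_live_tup AS approx_rows,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
WHERE relname IN ('user_actions', 'post_timings', 'topic_views', 'email_logs')
ORDER BY pg_total_relation_size(relid) DESC;
If post_timings turns out to have tens of millions of rows (as the name suggests, roughly one row per user per post read), the size becomes less surprising.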
You can run a backup, download it, and analyze it locally. A pg_dump is just a human-readable text file that will let you check exactly what is in each table.
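Roughly like this, assuming the archive contains the usual gzipped dump.sql (the backup filename below is just a placeholder):
tar -xvf forum-backup.tar.gz   # extract the downloaded backup archive
gunzip dump.sql.gz             # the archive contains a gzipped pg_dump
less dump.sql                  # each table appears as a plain-text COPY block you can read
# optionally, load it into a throwaway local database (the hstore and pg_trgm
# extensions need to be available) and run the same size query as above against it:
createdb discourse_inspect
psql -f dump.sql discourse_inspect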
I followed your suggestion and downloaded and extracted the backup. It was about 2GB. Is it normal that it’s about 1/4 of what Discourse reports?
By the way, I realized that a huge amount of data comes from an excessive number of inactive users, more than 100k of them. Is there an automatic way to delete all these users? They don’t have any posts or other things that might break the process.
If there is no automatic way and I remove them with API calls, does that also clear all information related to them from the database?
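In case it matters, the kind of call I had in mind is something like this (the endpoint and headers are the standard admin API as far as I know; the hostname, user id and key are placeholders):
curl -s -X DELETE "https://forum.example.com/admin/users/1234.json" \
  -H "Api-Key: YOUR_ADMIN_API_KEY" \
  -H "Api-Username: system"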
It looks like you did a restore recently, and the data from before the restore is kept in the backup schema. If you’re certain that you no longer need to roll back to the previous state, you can drop the schema by running the following commands:
./launcher enter app
su - postgres
psql discourse
ALTER SCHEMA "backup" RENAME TO "backup-moved";
-- Check that your site is still working and up to date
DROP SCHEMA "backup-moved" CASCADE;
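If you’re curious how much space that schema is actually holding, you can check before the rename with something like this (in the same psql session):
-- total size of all relations in the backup schema
SELECT pg_size_pretty(sum(pg_total_relation_size(C.oid)))
FROM pg_class C
JOIN pg_namespace N ON N.oid = C.relnamespace
WHERE N.nspname = 'backup';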
Thought of reviving this topic, since it’s the same issue.
Our Discourse postgres_data folder is 75GB, which I think is a lot. According to the admin panel, a backup is around 10.5GB and the uploads take about 9.3GB.
I’ve checked which tables are taking the most space and this is what I get:
I wonder if it’s normal to have a public.posts table taking so much space (51GB). We’re talking about a forum with 6M posts, which I don’t see as anything extraordinary.
There are lots of factors in play here. For example, long posts take more space than short ones. As a fellow Portuguese speaker, I know how verbose our language can be. I see your data comes from an import. Some artifacts, like posts with quotes nested 5 levels deep, aren’t common in Discourse but show up on your site because of the import. Our language also matters, as a ç takes twice the space of an s.
I believe we don’t change the PostgreSQL defaults, so the posts.raw column goes to TOAST and is compressed.
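If you want to confirm where those 51GB actually sit, a rough way to split the table into heap, TOAST and index space is the query below (pg_table_size minus the main fork is approximately the TOAST side):
SELECT pg_size_pretty(pg_relation_size('public.posts'))  AS heap_size,
       pg_size_pretty(pg_table_size('public.posts')
                      - pg_relation_size('public.posts')) AS toast_and_overhead,
       pg_size_pretty(pg_indexes_size('public.posts'))    AS index_size;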