Why does the rebuild say there isn't enough space when I have 5.8G free?

Before running ./launcher rebuild app, I have 5.8G of free space.

But while the app rebuilds, free space drops to 3.7G (less than 5G), and the rebuild stops.

Does this mean I need to keep at least 7G free before a rebuild?


I have a 20G VPS, and my Discourse backup file is 611M, less than 1G.

But that still isn't enough? Why is that?

Should I just wipe the whole VPS, reinstall Discourse, then upload the backup file and bring the Discourse server back up?


Look, I have 20G in total, and every rebuild needs 7G free beforehand. The server backup archive is 611M; uncompressed:

local_discourse/app takes about 3G

root@xxx:/var/discourse# docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
local_discourse/app   latest              98249b7dffc9        11 days ago         2.58GB
rethinkdb             latest              be24926bde9a        20 months ago       183MB
microbox/etcd         latest              6aef84b9ec5a        3 years ago         17.9MB

The Ubuntu system takes about 2G?

Does this mean a Discourse of only 611M needs a VPS larger than 20G?

That seems like a huge waste of space to me.


Docker version: 17.09.0-ce
Installed version: v2.3.0.beta2 +31
Latest version: 2.3.0.beta2

Which version are you upgrading from? Backups are compressed, and some upgrades include an upgrade to Postgres, which uses significantly more disk space than usual.

Some rebuilds need more space than others. When there is a PostgreSQL update, you need double the uncompressed database size as free space. When we release a new Docker image, you need space for the new image.

That is temporary and is reclaimed using the launcher cleanup command afterwards.
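That rule of thumb can be sketched in shell. This is only a rough estimate under stated assumptions: the ~3G allowance for a new base image and the `rebuild_needed_mb` helper are illustrations, not official Discourse numbers.

```shell
# Rough rule of thumb from above: a rebuild with a Postgres upgrade wants
# about 2x the uncompressed database size free, plus room for the new
# base image (~3000 MB here -- an assumption, not an official figure).

rebuild_needed_mb() {   # usage: rebuild_needed_mb <uncompressed_db_size_mb>
  echo $(( $1 * 2 + 3000 ))
}

# Example: a 1.5G database would want roughly 6G free before rebuilding:
rebuild_needed_mb 1500   # -> 6000

# Compare that against the free space on the partition holding Discourse:
#   df -m --output=avail /var/discourse | tail -1
```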

We just released a new image, so you need enough space for it.

In my experience even 25GB is a bit tight.

Very strange. The Discourse I use regularly (on a 20G KVM RamNode) has a backup file of only 670M, yet the server doesn't have enough space to rebuild.

Meanwhile another Discourse of mine that I rarely use, also on a 20G KVM RamNode, has a 990M backup file but still has enough space to rebuild.

So I checked the space again and found a big ./log folder. Is it safe to delete the log files in that big folder?

root@xxx:/var# du -h --max-depth=1
3.6G    ./log
20K     ./spool
4.0K    ./opt
4.0K    ./tmp
1.6M    ./backups
1.8G    ./discourse
188M    ./cache
4.0K    ./local
4.0K    ./mail
7.1G    ./lib
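To see which files inside a large directory like ./log are actually eating the space, a generic one-liner can help (this assumes GNU find; the path is just an example):

```shell
# List the ten largest files under /var/log, biggest first
# (%s = size in bytes, %p = path; errors from unreadable dirs are discarded)
find /var/log -type f -printf '%s\t%p\n' 2>/dev/null | sort -rn | head -10
```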

Try a

./launcher cleanup

That will remove unused Docker images.

But I consider 25 a minimum, and even at that it requires careful monitoring of disk storage.

I have already tried that.

I’m considering moving to another VPS with 40G of space when this contract is over.

But today I suddenly noticed that the one I rarely use has a 990M backup file, far larger than the 670M backup of the one I use regularly.

So I think I can delete something and keep working on this server for another year. I don’t want the hassle (“zheteng”).

You might trim the logs more aggressively.

The size of a backup means little as the database is compressed.

So, how should I trim the logs?

You can Google that. I think logrotate is what to look for. A quick, temporary manual solution is to enter the directories with the logs and

rm *
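A slightly safer sketch than `rm *` is to truncate the files instead: a daemon that still holds a deleted log open keeps the space allocated until it restarts, whereas truncation frees it immediately. `trim_logs` is a hypothetical helper for illustration, not a Discourse or system command.

```shell
# Empty every *.log file under a directory without deleting it, so any
# process with an open handle keeps writing to the same (now empty) inode.
trim_logs() {   # usage: trim_logs <dir>
  find "$1" -type f -name '*.log' -exec truncate -s 0 {} +
}

# e.g.  trim_logs /var/log
```

For a permanent fix, a logrotate rule with size limits and compression is the cleaner route.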

The backup size is immaterial, please stop citing it as a factor here.

Backups are gzipped and include attachments; the smaller database could have 50MB of attachments while the larger has 500MB.

You’ve also not mentioned whether they’re at the same software levels. The amount of disk required is directly related to the operations that need to be performed. If either requires a Postgres upgrade, it will use significantly more disk.

Can you not upgrade the disk during your current contract? Most hosts allow this.

I found the true answer.

Discourse doesn’t actually consume this much space, contrary to what most of the answers stated.

I bought another VPS with 40G of space and moved the system to that server. It turned out to need only 9G rather than the previous 14G, so the previous 20G server is actually quite enough for this forum.

That’s because the old server had been running for three years or more, and a lot of packages and files that are no longer needed were still sitting there. So the best choice is to download the backup file, reinstall Ubuntu on the server, reinstall Discourse, and restore the backup. The process takes about two hours, I think, depending on network speed.