2.6.0 beta 3 update failed on disk and/or memory space

You can try to find out where the space has gone, for example at your root shell prompt:
du -kx / | sort -n | tail -33
and perhaps also
find / -xdev -size +1000k -ls | sort -n -k 2 | tail
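Those two checks can also be wrapped into a small triage script. A sketch, with TARGET as a placeholder: set it to / on the server, where scanning the whole filesystem may take a while.

```shell
# Disk-usage triage sketch. TARGET is a placeholder; set it to / on the server.
TARGET=${TARGET:-/tmp}
# Overall usage of the filesystem holding TARGET:
df -h "$TARGET"
# Largest directories under TARGET, biggest last (sizes in 1K blocks):
du -kx "$TARGET" 2>/dev/null | sort -n | tail -10
```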


OK, the du command (= disk usage?) reports

root@nz:/var/discourse# du -kx / | sort -n | tail -33
747432  /var/lib/docker/overlay2/794d92242be8354a3ba468161d1d2ea416ab43a6f70ddd7ab060384bd5a9fcd5/diff/var/www/discourse/vendor/bundle
750792  /var/lib/docker/overlay2/794d92242be8354a3ba468161d1d2ea416ab43a6f70ddd7ab060384bd5a9fcd5/diff/var/www/discourse/vendor
819620  /var/log/journal/8bebc832e1a692c83690ffe65e1256e3
860600  /var/log/journal
926492  /var/lib/docker/overlay2/fadc3460a6249ba37efa266d885f5ea3d6f6de8fee0e9b056c5376599bb7f354/diff/var/www/discourse
926504  /var/lib/docker/overlay2/fadc3460a6249ba37efa266d885f5ea3d6f6de8fee0e9b056c5376599bb7f354/diff/var/www
998192  /var/lib/docker/overlay2/fadc3460a6249ba37efa266d885f5ea3d6f6de8fee0e9b056c5376599bb7f354/diff/var
1059968 /var/log
1202004 /var/discourse/shared/standalone/import/data
1237720 /var/lib/docker/overlay2/fadc3460a6249ba37efa266d885f5ea3d6f6de8fee0e9b056c5376599bb7f354/diff/usr
1307816 /var/discourse/shared/standalone/import
1399332 /var/discourse/shared/standalone/backups/default
1399336 /var/discourse/shared/standalone/backups
1636032 /var/lib/docker/overlay2/794d92242be8354a3ba468161d1d2ea416ab43a6f70ddd7ab060384bd5a9fcd5/diff/var/www/discourse
1636040 /var/lib/docker/overlay2/794d92242be8354a3ba468161d1d2ea416ab43a6f70ddd7ab060384bd5a9fcd5/diff/var/www
1678944 /var/lib/docker/overlay2/794d92242be8354a3ba468161d1d2ea416ab43a6f70ddd7ab060384bd5a9fcd5/diff/var
1692708 /usr
1820540 /var/lib/docker/volumes
2235084 /var/lib/docker/overlay2/794d92242be8354a3ba468161d1d2ea416ab43a6f70ddd7ab060384bd5a9fcd5/diff
2235104 /var/lib/docker/overlay2/794d92242be8354a3ba468161d1d2ea416ab43a6f70ddd7ab060384bd5a9fcd5
2285628 /var/lib/docker/overlay2/fadc3460a6249ba37efa266d885f5ea3d6f6de8fee0e9b056c5376599bb7f354/diff
2285644 /var/lib/docker/overlay2/fadc3460a6249ba37efa266d885f5ea3d6f6de8fee0e9b056c5376599bb7f354
2437984 /var/discourse/shared/standalone/postgres_data/base/16384
2461228 /var/discourse/shared/standalone/postgres_data/base
2545304 /var/discourse/shared/standalone/postgres_data
5184184 /var/lib/docker/overlay2
5692972 /var/discourse/shared/standalone
5693056 /var/discourse/shared
5695776 /var/discourse
7076692 /var/lib/docker
7446508 /var/lib
14417296        /var
18614840        /

18.6 GB?

The find command gives
root@nz:/var/discourse# find / -xdev -size +1000k -ls | sort -n -k 2 | tail
       524219  217044 -rw-------   1 lxd      mlocate          222248960 May 22 03:39 /var/discourse/shared/standalone/postgres_data/base/16384/20379
       523733  234132 -rw-------   1 lxd      mlocate          239747072 May 23 01:29 /var/discourse/shared/standalone/postgres_data/base/16384/19615
       610933  244612 -rw-------   1 lxd      mlocate          250478592 Sep 25 03:37 /var/discourse/shared/standalone/postgres_data/base/16384/153088
       610816  248180 -rw-------   1 lxd      mlocate          254132224 Sep 26 03:59 /var/discourse/shared/standalone/postgres_data/base/16384/152767
      2596079  307188 -r--r--r--   1 ReadyNAS ReadyNAS  314556297 Apr 29 20:50 /var/lib/docker/overlay2/fadc3460a6249ba37efa266d885f5ea3d6f6de8fee0e9b056c5376599bb7f354/diff/var/www/discourse/.git/objects/pack/pack-ffd1b8da21b9e26b4475a3fef6537a89f21989d6.pack
      1548730  464076 -rw-r--r--   1 ReadyNAS www-data         475209681 Sep 24 03:31 /var/discourse/shared/standalone/backups/default/nz-architecture-2020-09-24-033014-v20200820232017.tar.gz
      1548729  467616 -rw-r--r--   1 ReadyNAS www-data         478832984 Sep 25 03:38 /var/discourse/shared/standalone/backups/default/nz-architecture-2020-09-25-033723-v20200820232017.tar.gz
      1548294  467636 -rw-r--r--   1 ReadyNAS www-data         478853932 Sep 26 03:54 /var/discourse/shared/standalone/backups/default/nz-architecture-2020-09-26-035209-v20200820232017.tar.gz
       794198  627936 -rw-r--r--   1 ReadyNAS ReadyNAS         643002368 Mar 28  2020 /var/discourse/shared/standalone/import/data/index.db
        60454 2097156 -rw-------   1 root     root     2147483648 Jan  4  2020 /swapfile

Does anything stand out as unexpectedly large?
I have no way to tell what is normal, or what, if anything, can be safely deleted…

(the ReadyNAS files are forum backups, generated by Discourse)

Database is 2.5GB. Backups combined are 1.3GB.

You’ve got several large Docker layers - are you sure that ./launcher cleanup actually completed?


./launcher cleanup gives the following result for me, almost instantaneously. Not sure what I am missing here…

root@nz:/var/discourse# ./launcher cleanup
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B
WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B

That large import area might be suspicious. But its largest file, index.db, is only 640M.

There are some tips here about seeing what docker is using, and trying to reduce it. (It’s possible some of the advice is wrong or dangerous.) I saw this in my rather small world:

# docker volume ls -qf dangling=true
# docker images -a
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
local_discourse/app   latest              33ce56b34841        3 months ago        2.59GB
<none>                <none>              991acdba0b1f        4 months ago        2.22GB
# docker system df
TYPE                TOTAL               ACTIVE              SIZE                RECLAIMABLE
Images              1                   1                   2.591GB             0B (0%)
Containers          1                   1                   920.9MB             0B (0%)
Local Volumes       0                   0                   0B                  0B
Build Cache         0                   0                   0B                  0B

Did you see this advice about vacuuming your database and rebuilding indexes after the recent postgresql update?


Thanks Ed - both those things helped

Most posts in that thread did not help, but one suggested docker system prune --all --volumes --force which cleaned more than 1GB

That procedure reclaimed 2.3GB - enough to give me the minimum 5GB

I then repeated the first two commands of the standard upgrade procedure

But the process seemed to fill things back up again, because at the third command I got

root@nz:/var/discourse# ./launcher rebuild app
You have less than 5GB of free space on the disk where /var/lib/docker is located. You will need more space to continue
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        25G   22G  3.0G  88% /
Would you like to attempt to recover space by cleaning docker images and containers in the system?(y/N)y
If the cleanup was successful, you may try again now

At that point I tried ./launcher cleanup again, but got

root@nz:/var/discourse# /launcher cleanup
-bash: /launcher: No such file or directory

(That should be ./launcher not /launcher)


Ah - thanks Ed! That got back 2.6GB - just enough to let installation complete.

Many thanks to everyone for the helpful suggestions.

It still worries me a bit that I am apparently using so much space, at least as reported by Ubuntu, because future forum updates may presumably hit the same issue.

I note within my Discourse installation, free disk space available for storing backups is reported as 9 GB - a major discrepancy. What could lie behind that?

** Edit ** - retrying du -kx / | sort -n | tail -33 I see Discourse is correct - my disk usage has shrunk to 15GB as a result of all of the above

If you go to yourdomain.com/admin/backups

How many backups do you have there? Maybe you can keep one and delete the rest.


Only 3 backups now @ 457 MB each, but it looks like the disk space headroom problem has been resolved through the various clean-ups… for now at least.

I read in passing something about Docker accumulating ‘difference data’ and logs at a ferocious rate, and a lot of people reporting similar issues with disks filling themselves up.


I would agree that the following message is worrying, and it’s odd that Discourse believes it is fully updated:


I wonder if, in addition to disk space issues, the upgrade process exceeded the memory (1GB) allocated to the droplet? You can see in my console screenshot above a reference to ‘Out of memory’ as the first item logged after a ./launcher rebuild app.

What I did not mention is that after that attempt the console stopped responding (though at that point I was using the web-based console in my Digital Ocean Control Panel, which is always flaky), and I then power cycled the droplet. (Thereafter I used PuTTY.)

Either way, no: it’s not great that the update was reported as a success via the Discourse upgrade page after presumably hitting the same memory and/or disk issue.


Ah, the OOM killer ran. That’s certainly not good. Normally I would recommend increasing swap space. You can see current usage with swapon, in my case

# swapon 
/swapfile file   2G   3M   -2

Also free

# free
              total        used        free      shared  buff/cache   available
Mem:        1992060      792904       80148       34696     1119008     1004956
Swap:       2097148        3084     2094064

It would be bad if your 2G swapfile was not in play. It’s bad that you can’t add swap without using disk space!
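For reference, adding a swap file is straightforward once disk space allows. A sketch with placeholder values so it is cheap to try; on a real server you would run this as root, use /swapfile, and a 2G size:

```shell
# Sketch of adding a swap file. The path and size here are demo placeholders,
# not recommendations; on the real server use /swapfile and count=2048 (2G).
SWAPFILE=${SWAPFILE:-/tmp/swapfile.demo}
dd if=/dev/zero of="$SWAPFILE" bs=1M count=16 status=none
chmod 600 "$SWAPFILE"
# Write the swap signature (skipped quietly if mkswap is unavailable here):
if command -v mkswap >/dev/null; then mkswap "$SWAPFILE"; fi
# Root-only steps on the real server:
#   swapon "$SWAPFILE"                              # enable it now
#   echo "$SWAPFILE none swap sw 0 0" >> /etc/fstab # survive reboots
```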

One way to improve disk space for an upgrade is to copy all the backup files offsite, check their integrity, then delete them from your server. You absolutely need a good recent backup somewhere safe over an upgrade, just in case, but it need not be on the server itself. I would feel comfortable deleting all but the latest, but would certainly take an offline copy.
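One hedged way to check an archive after copying it offsite is gzip's built-in test mode. The sketch below builds a throwaway archive just to demonstrate; on the server you would point gzip -t at the real .tar.gz files under /var/discourse/shared/standalone/backups/default/ instead:

```shell
# Sketch: verify a backup's gzip wrapper before trusting the offsite copy.
demo=$(mktemp -d)
echo "payload" > "$demo/file"
tar -czf "$demo/backup-demo.tar.gz" -C "$demo" file
# gzip -t decompresses without writing output; exit status 0 means intact.
gzip -t "$demo/backup-demo.tar.gz" && echo "archive OK"
rm -rf "$demo"
```

Note that gzip -t only validates the compressed wrapper; a full tar -tzf listing is a stronger check of the archive contents.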

It would be good to see the du results again, now you’ve done all the cleanups.


I wonder: is the 1G your RAM allocation and the 25G your disk allocation? Two very different things.

Edit: the supported standard story is, I think, to have rather more than 1G of RAM.
Edit: no, apparently 1G is still the recommended absolute minimum.


I just connected again, and the system info reported on launch of the console window is

System load:  0.01               Processes:              136
Usage of /:   59.4% of 24.06GB   Users logged in:        0
Memory usage: 73%                IP address for eth0:
Swap usage:   17%                IP address for docker0:

So swap space of 17% = 4GB?
With no one logged in to the forum, and only the current PuTTY connection to the droplet active, RAM is 73% full. So it doesn’t look like it would take much activity to tip the forum over into swap, and if that comes out of the 24GB, perhaps that creates the perfect storm during an update, with disk space usage already running high?

du -kx / | sort -n | tail -33 now gives me

root@nz:~# du -kx / | sort -n | tail -33
505512  /usr/bin
528784  /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35/diff/var/www/discourse/vendor/bundle/ruby/2.6.0
528788  /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35/diff/var/www/discourse/vendor/bundle/ruby
528792  /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35/diff/var/www/discourse/vendor/bundle
536848  /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35/diff/var/www/discourse/vendor
548952  /var/lib/docker/overlay2/c126267f944d8d7f12415ac4f5908eba8a6a686b093cad3e0115eded8edfd6ba/diff
548968  /var/lib/docker/overlay2/c126267f944d8d7f12415ac4f5908eba8a6a686b093cad3e0115eded8edfd6ba
817700  /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35/diff/usr/lib
827812  /var/log/journal/8bebc832e1a692c83690ffe65e1256e3
868792  /var/log/journal
1069356 /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35/diff/var/www/discourse
1069368 /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35/diff/var/www
1069396 /var/log
1142352 /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35/diff/var
1202004 /var/discourse/shared/standalone/import/data
1307816 /var/discourse/shared/standalone/import
1362804 /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35/diff/usr
1399332 /var/discourse/shared/standalone/backups/default
1399336 /var/discourse/shared/standalone/backups
1709408 /usr
2438224 /var/discourse/shared/standalone/postgres_data/base/16583
2462944 /var/discourse/shared/standalone/postgres_data/base
2481288 /var/discourse/shared/standalone/postgres_data
2540188 /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35/diff
2540204 /var/lib/docker/overlay2/3b68a713bd8e9a7f3b2a69ba8084a770b796e555e887ce4f66698d3894430c35
3387776 /var/lib/docker/overlay2
3460136 /var/lib/docker
3830584 /var/lib
5629420 /var/discourse/shared/standalone
5629504 /var/discourse/shared
5632224 /var/discourse
10747244        /var
14961492        /

I think you can improve this with journalctl, maybe with

# journalctl --vacuum-size=50M

(which you might do immediately before trying an upgrade)
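To stop the journal growing straight back, the cap can also be made persistent, assuming this host runs systemd-journald with the default config location:

```
# /etc/systemd/journald.conf (excerpt)
[Journal]
SystemMaxUse=50M
```

followed by systemctl restart systemd-journald to apply it.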

Interesting that the postgresql usage hasn’t gone down.

free will show you the swap usage: it’s 17% used, of some amount, probably 2G.
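The arithmetic is quick to sketch, assuming the 2097148 KiB SwapTotal that free reports for a 2G swap file:

```shell
# "Swap usage: 17%" of a ~2G swap file is about 350 MB, not 4 GB.
total_kb=2097148            # SwapTotal from free, in KiB
used_pct=17                 # the percentage shown in the login banner
used_kb=$(( total_kb * used_pct / 100 ))
echo "swap used: ${used_kb} KiB (~$(( used_kb / 1024 )) MiB)"
```

which comes out to roughly 348 MiB of swap actually in use.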

It’s clear that your machine is a little uncomfortably small: you need more RAM or more swap, and you can’t practically have much more swap without getting more disk.

Apologies, you are quite right. The 1GB was RAM, not disk space used.


Quite right again

root@nz:~# free
              total        used        free      shared  buff/cache   available
Mem:        1008828      655660       61716      102288      291452       96576
Swap:       2097148      459776     1637372

I wonder if the upgrade process should assess whether the host system has the capacity to complete the upgrade, just before it starts?


I think it’s somewhat in the category of predicting the future! The check for 5G disk space is clearly helpful, but won’t be watertight. Free RAM is more difficult, it’s quite slippery as to how much will be needed. It will be a function, I would think, of how big the forum has become, and perhaps also of what needs to be touched during each upgrade.

I’m careful to minimise costs, so I will spend time trying to squeeze into a cheap server. But eventually, as a forum grows, it will surely be worth moving up to the next tier. And that will cost money, but save time and effort.