Any advice on how to save space?

We run a self-hosted Discourse site on DigitalOcean with a 25 GB disk. I just tried to update our Discourse image and got a “you will need more space to continue” message. After cleaning up the Docker image and containers, we’re still 0.4 GB short.

Any advice on how to save space? Both in order to update now and also how to save space in the future. I know we’ll need to resize soon, but it’d be helpful to make it through at least one more Discourse image update.

4 Likes

Do you store backups locally?
If so, maybe getting rid of a couple of old ones could help. 0.4 GB (i.e. 400 MB) should be manageable.

You could also try cleaning up some space on the host system as well.
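
If you’re not sure where the space is going, here are a couple of quick checks (the backup path assumes a standard standalone install, so adjust if yours differs):

df -h
ls -lh /var/discourse/shared/standalone/backups/default

The first shows overall disk usage per filesystem; the second lists any local Discourse backups, which you can remove with rm once you’ve downloaded the ones you want to keep.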

1 Like

Try rebooting your server, then try the update again if you are sure there is enough space.

We use DigitalOcean’s backup functionality. I haven’t seen an option for manually deleting one of our backups.

How would I go about doing this? I’m someone who does not have programming experience, but I’m capable of understanding what to do and why I’m doing it after getting some directions.

Tried rebooting and no change.

After running sudo du -h --max-depth 1, these are my results:

[Screenshot of results]

Try pruning unused Docker objects.
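
For example, this removes stopped containers, unused networks, dangling images, and the build cache (add --all if you also want to remove unused tagged images, at the cost of re-downloading them later):

docker system prune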

1 Like

At this point I would save myself trouble and do the work to resize now.

3 Likes

I pruned everything except unused volumes because docker volume ls showed that we only have one.

Did you try:

./launcher cleanup

1 Like

Yup!

I did dig a little more into our Docker images with docker images -a, though, and saw this.

What’s going on with <none>?
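
If I’m reading the Docker docs right, you can list just these untagged images with:

docker images -f dangling=true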

1 Like

Some googling took me to this very interesting comment.

John Rofrano (rofrano), Nov '17:

It is important to understand why you have intermediate untagged images showing as <none> <none> in order to avoid them since, as you have seen, you can’t remove them if they are in use.

The reason untagged images happen is that you built an image, then changed the Dockerfile and built that image again, and the rebuild reused some of the layers from the previous build. Now you have an untagged image which cannot be deleted because some of its layers are being used by the new version of that image.

The solution is to:

  • Delete the new version of the image
  • Delete the untagged image and
  • Rebuild the new version of the image so that it owns all of the layers.

You will be left with a single tagged image that contains all of the layers of the previous untagged images and the new image.

~jr

I wasn’t expecting to find 2.64 GB in a docker image, so now I’m trying to figure out what’s happening there. If I don’t need this image at all, then we are definitely far from needing to resize.

Did you do a

./launcher cleanup

But I recommend that you resize. I’m surprised that you’ve made it this long with 25 GB.

Also, did you look in shared/backups/default?

I would definitely not trust DigitalOcean’s backups as a means of backing up your forum.

2 Likes

How long? I don’t see any hint of that - I do know that I’m happily running one forum on 20G and another on 25G.

Under shared you might well have a lot of backup data (perhaps in shared/standalone/backups/default). You might also have old database copies or old log files. I’d recommend you run
du -kx / | sort -n | tail -49
or similar.

It’s fair to note that you can save time, at the expense of money, by moving to a larger instance. Or you can make the opposite tradeoff.

This worries me a bit. DO might well help you with backups of your whole system, but if it were me, I’d be happier to know how to take Discourse backups and how to get a safe local copy. And how to prune the backups. (If by some misfortune DO deleted your instance and your account, you’d want your data to survive that.)
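
For example, once automatic backups are enabled in the Discourse admin settings, pulling the latest one down to your machine can look something like this (the hostname and local directory are placeholders, and the path assumes a standard standalone install):

# hostname and destination directory below are illustrative; adjust to your setup
scp root@your-droplet-ip:/var/discourse/shared/standalone/backups/default/*.tar.gz ~/discourse-backups/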

:woman_facepalming:t3: We use the Discourse backup functionality, too, and I realized that we hadn’t cleared the old backups there.

Well, I deleted all but the most recent backup using the Discourse interface and also downloaded the newest backup to my local drive. That brings me to less than 100 MB away from having enough space.

Here’s what I get when I run that command in /var/discourse:

656876  /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d/diff/var/www/discourse/vendor
819624  /var/log/journal/e734ad1931dbee4740881cc15c9e7a9a
826292  /var/discourse/shared/standalone
826296  /var/discourse/shared
831476  /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/home/discourse/.cache/yarn/v6
831484  /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/home/discourse/.cache/yarn
831492  /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/home/discourse/.cache
832188  /var/discourse
845992  /lib/modules
850136  /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/home/discourse
850144  /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/home
898764  /var/lib/docker/overlay2/58e9df9d9e2e10efb3dcf68771edd172664f8d91e3aa2e0b280fd4549bfd2a91/diff/usr/lib
966656  /var/lib/docker/overlay2/21f4d6109bd809c584ae84f9f7c50286c6126176f86a2ef61c4c24ce1e633765/diff/var/www/discourse
966660  /var/lib/docker/overlay2/21f4d6109bd809c584ae84f9f7c50286c6126176f86a2ef61c4c24ce1e633765/diff/var/www
966664  /var/lib/docker/overlay2/21f4d6109bd809c584ae84f9f7c50286c6126176f86a2ef61c4c24ce1e633765/diff/var
991800  /var/lib/docker/overlay2/21f4d6109bd809c584ae84f9f7c50286c6126176f86a2ef61c4c24ce1e633765/diff
991816  /var/lib/docker/overlay2/21f4d6109bd809c584ae84f9f7c50286c6126176f86a2ef61c4c24ce1e633765
994980  /var/lib/docker/overlay2/2749f8a24b3e28af399b256ecab7f2db0cb146939a0ef56e83858a0e696c3df6/diff/usr/lib
1089092 /var/lib/docker/overlay2/9817d45d2728572ad6dc4d62df5944dfad69c35b76753ceb260e0130863ece49/diff/var/www/discourse
1089096 /var/lib/docker/overlay2/9817d45d2728572ad6dc4d62df5944dfad69c35b76753ceb260e0130863ece49/diff/var/www
1130168 /var/lib/docker/overlay2/9817d45d2728572ad6dc4d62df5944dfad69c35b76753ceb260e0130863ece49/diff/var
1177644 /var/lib/docker/overlay2/9817d45d2728572ad6dc4d62df5944dfad69c35b76753ceb260e0130863ece49/diff
1177660 /var/lib/docker/overlay2/9817d45d2728572ad6dc4d62df5944dfad69c35b76753ceb260e0130863ece49
1224436 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d/diff/var/www/discourse
1224440 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d/diff/var/www
1224444 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d/diff/var
1234612 /lib
1248080 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d/diff
1248096 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d
1342320 /var/lib/docker/overlay2/58e9df9d9e2e10efb3dcf68771edd172664f8d91e3aa2e0b280fd4549bfd2a91/diff/usr
1516440 /usr
1543656 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/var/www/discourse
1543664 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/var/www
1558580 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/var
1659548 /var/lib/docker/overlay2/58e9df9d9e2e10efb3dcf68771edd172664f8d91e3aa2e0b280fd4549bfd2a91/diff
1659564 /var/lib/docker/overlay2/58e9df9d9e2e10efb3dcf68771edd172664f8d91e3aa2e0b280fd4549bfd2a91
2040472 /var/lib/docker/overlay2/2749f8a24b3e28af399b256ecab7f2db0cb146939a0ef56e83858a0e696c3df6/diff/usr
2171304 /var/log/journal/d893af269dfb5f73239a5b6761d49ea0
2388612 /var/lib/docker/overlay2/2749f8a24b3e28af399b256ecab7f2db0cb146939a0ef56e83858a0e696c3df6/diff
2388628 /var/lib/docker/overlay2/2749f8a24b3e28af399b256ecab7f2db0cb146939a0ef56e83858a0e696c3df6
2461904 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff
2461924 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058
3064672 /var/log/journal
3276268 /var/log
10107180        /var/lib/docker/overlay2
10131984        /var/lib/docker
10396840        /var/lib
14869684        /var
20007992        /

I got additional instructions on how to handle this. The details are:

John Rofrano (rofrano), 20h:

The command to remove an image is:

docker rmi {image_name}

Where {image_name} is the name of the image you want to delete. You can also use the image ID to delete the image (e.g., docker rmi {image_id}). This is what you will need to use to delete an image with a name of <none>.

For example, let’s say you have the following images:

REPOSITORY           TAG        IMAGE ID       CREATED              SIZE
my-new-image         latest     c18f86ab8daa   12 seconds ago       393MB
<none>               <none>     b1ee72ab84ae   About a minute ago   393MB
my-image             latest     f5a5f24881c3   2 minutes ago        393MB

It is possible that the <none> image cannot be deleted because my-new-image is using some layers from it. What you need to do is:

docker rmi my-new-image:latest
docker rmi b1ee72ab84ae
docker build -t my-new-image .

What that does is remove my-new-image:latest, which is reusing layers from the <none> image. It then deletes the <none> image using its image ID, b1ee72ab84ae. Finally, it rebuilds my-new-image, creating all of the layers that are needed.

Also check to make sure that you don’t have stopped containers that are still using the <none> “untagged” image. Use docker ps -a to see all containers, including ones that have exited. If so, use docker rm {container_id} to remove the container and then try to remove the <none> image again.

What do you all think?
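
If I understand that last part right, checking for stopped containers that might still be pinning the <none> image would look something like:

docker ps --filter status=exited
docker rm {container_id}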

I think you can improve things here:

See this earlier exchange:

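The short version, assuming the culprit is the ~3 GB /var/log/journal in your listing: check how much the systemd journal is using, shrink it, and cap it so it doesn’t grow back.

journalctl --disk-usage                 # see how much the journal is using
sudo journalctl --vacuum-size=200M      # shrink it; 200M is just an example cap

To change the policy permanently, set SystemMaxUse=200M in /etc/systemd/journald.conf and restart with sudo systemctl restart systemd-journald. (Pick whatever cap fits your disk.)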

3 Likes

This did the trick and I changed the policy as well!

I still want to track down the issue with the <none> image (since it’s ridiculous that it’s taking 2 GB+ of space), but you solved my most immediate problem of creating enough space to upgrade! Thank you!!

3 Likes

Absolutely true! For now I’m having a lot of fun learning new things, so the time is worth it.