My rebuild failed due to lack of disk space, so I need to free some up. But I’m stuck in a loop: ./launcher cleanup frees enough space to get above 5 GB, but the rebuild then downloads the base image, fills the recovered space again, and fails to complete. See below.
How do I get things running again?
$ sudo ./launcher cleanup
The following command will
- Delete all docker images for old containers
- Delete all stopped and orphan containers
Are you sure (Y/n):
Starting Cleanup (bytes free 3931580)
Finished Cleanup (bytes free 5903356)
$ sudo ./launcher rebuild app
WARNING: Docker version 17.05.0-ce deprecated, recommend upgrade to 17.06.2 or newer.
WARNING: We are about to start downloading the Discourse base image
This process may take anywhere between a few minutes to an hour, depending on your network speed
Please be patient
Unable to find image 'discourse/base:2.0.20180802' locally
2.0.20180802: Pulling from discourse/base
8ee29e426c26: Pulling fs layer
6e83b260b73b: Pulling fs layer
e26b65fd1143: Pulling fs layer
40dca07f8222: Pulling fs layer
b420ae9e10b3: Pulling fs layer
b89ccfe9dadc: Pulling fs layer
40dca07f8222: Waiting
b420ae9e10b3: Waiting
b89ccfe9dadc: Waiting
e26b65fd1143: Verifying Checksum
e26b65fd1143: Download complete
6e83b260b73b: Verifying Checksum
6e83b260b73b: Download complete
b420ae9e10b3: Verifying Checksum
b420ae9e10b3: Download complete
40dca07f8222: Verifying Checksum
40dca07f8222: Download complete
8ee29e426c26: Verifying Checksum
8ee29e426c26: Download complete
8ee29e426c26: Pull complete
6e83b260b73b: Pull complete
e26b65fd1143: Pull complete
40dca07f8222: Pull complete
b420ae9e10b3: Pull complete
b89ccfe9dadc: Verifying Checksum
b89ccfe9dadc: Download complete
b89ccfe9dadc: Pull complete
Digest: sha256:be738714169c78e371f93bfa1079f750475b0910567d4f86fa50d6e66910b656
Status: Downloaded newer image for discourse/base:2.0.20180802
You have less than 5GB of free space on the disk where /var/lib/docker is located. You will need more space to continue
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg-lv_root 19G 14G 3.8G 79% /
Would you like to attempt to recover space by cleaning docker images and containers in the system?(y/N)y
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all dangling images
Are you sure you want to continue? [y/N] y
Total reclaimed space: 0B
If the cleanup was successful, you may try again now
$
Clean up some more space so that the build has enough breathing room to complete. Docker’s cleanup system isn’t always great at purging old images, so I sometimes have to do a docker images followed by a long docker rmi <ID> <ID> <ID> ....
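A rough sketch of that workflow (the <ID> values are placeholders, not real IDs; use the ones docker images prints on your host, and skip any image that a running container in docker ps is using):

$ docker images               # list every image with its ID and size
$ docker rmi <ID> <ID> <ID>   # remove the ones nothing running depends on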
Anything not in use by a running container is usually safe enough, as far as Discourse is concerned, because it’ll be re-downloaded and/or rebuilt the next time you rebuild. If there isn’t a huge pile of images there, though, it’s probably time for you to get a disk upgrade.
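If picking IDs by hand is tedious, a blunter option is to let Docker prune unused images in one go, assuming you’re comfortable with everything not attached to an existing container being removed (your Docker version, 17.05, is new enough to have these subcommands):

$ docker system df        # show how much space images, containers and volumes are using
$ docker image prune -a   # remove every image not used by at least one container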
Is there any way I can stop it from downloading the latest discourse base image every time I try to rebuild or start the app? I’d like it to just use the old one for now so that I can go to bed…
It only downloads it if it is not available locally; we really only download an image once, and we only bump the required image in launcher once every few months. There are ways to specify a base image, BUT you do not want to do that for a rainbow of reasons.
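To check whether the base image is actually still present locally (and therefore won’t be pulled again on the next rebuild), you can list it by name:

$ docker images discourse/base   # if the 2.0.20180802 tag is listed, rebuild won’t re-download it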
But when I try to rebuild or start the app, the base image that was presumably deleted by cleanup gets downloaded again and I’m back to where I started.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1fba0860cbc3 local_discourse/web_only "/sbin/boot" 5 months ago Up 29 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp web_only
aa6b422d88ca local_discourse/data "/sbin/boot" 8 months ago Up 29 minutes data
2940a1603151 local_discourse/mail-receiver "/sbin/boot" 8 months ago Up 29 minutes 0.0.0.0:25->25/tcp mail-receiver
What do you mean by that? Will I need a Discourse backup? Because I don’t have one…