At this point I would save myself trouble and do the work to resize now.
I pruned everything except unused volumes, because `docker volume ls` showed that we only have one.
Did you try `./launcher cleanup`?
Yup!
I did dig a little more into our docker images, though, with `docker images -a`, and I see this. What’s going on with `<none>`?
Some googling took me to this very interesting comment.
It is important to understand why you have intermediate untagged images showing as `<none> <none>`, in order to avoid them, since, as you have seen, you can’t remove them if they are in use.

The reason untagged images happen is that you built an image, then you changed the `Dockerfile` and built the image again, and the rebuild reused some of the layers from the previous build. Now you have an untagged image which cannot be deleted, because some of its layers are being used by the new version of that image.

The solution is to:

- Delete the new version of the image,
- Delete the untagged image, and
- Rebuild the new version of the image so that it owns all of the layers.

You will be left with a single tagged image that contains all of the layers of the previous untagged images and the new image.
~jr
I wasn’t expecting to find 2.64 GB in a docker image, so now I’m trying to figure out what’s happening there. If I don’t need this image at all, then we are definitely far from needing to resize.
Did you do a `./launcher cleanup`?

But I recommend that you resize. I’m surprised that you’ve made it this long with 25 GB.
Also, did you look in `shared/backups/default`?

I would definitely not trust Digital Ocean’s backups as a means to back up your forum.
How long? I don’t see any hint of that - I do know that I’m happily running one forum on 20G and another on 25G.
Under `shared` you might well have a lot of backup data (perhaps in `shared/standalone/backups/default`). You might also have old database copies, or old log files. I’d recommend you run

du -kx / | sort -n | tail -49

or similar.
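If you want to see how that pipeline behaves before pointing it at `/`, here is a self-contained sketch against a scratch directory (paths and sizes below are invented for the demo; the `-x` flag in the real command additionally keeps `du` on one filesystem, which matters once `shared` lives on its own volume):

```shell
# Safe rehearsal of the du | sort | tail pattern against a scratch tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/big" "$tmp/small"
dd if=/dev/zero of="$tmp/big/blob" bs=1024 count=2048 2>/dev/null   # ~2 MB
dd if=/dev/zero of="$tmp/small/blob" bs=1024 count=16 2>/dev/null   # ~16 KB

# -k reports sizes in KB, sort -n orders them numerically, tail keeps the
# biggest entries: the largest directories end up at the bottom.
report=$(du -k "$tmp" | sort -n | tail -3)
printf '%s\n' "$report"
rm -rf "$tmp"
```

The biggest directory always lands on the last lines, which is why `tail -49` on a full system puts the space hogs right in front of you.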
It’s fair to note that you can save time, at the expense of money, by moving to a larger instance. Or you can make the opposite tradeoff.
This worries me a bit. DO might well help you with backups of your whole system, but if it were me, I’d be happier to know how to take Discourse backups and how to get a safe local copy. And how to prune the backups. (If by some misfortune DO deleted your instance and your account, you’d want your data to survive that.)
We use the Discourse backup functionality, too, and I realized that we hadn’t cleared the old backups there.
Well, I deleted all but the most recent backup using the Discourse interface and also downloaded the newest backup to my local drive. That brings me to less than 100 MB away from having enough space.
Here’s what I get when I run that command in `/var/discourse`:
656876 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d/diff/var/www/discourse/vendor
819624 /var/log/journal/e734ad1931dbee4740881cc15c9e7a9a
826292 /var/discourse/shared/standalone
826296 /var/discourse/shared
831476 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/home/discourse/.cache/yarn/v6
831484 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/home/discourse/.cache/yarn
831492 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/home/discourse/.cache
832188 /var/discourse
845992 /lib/modules
850136 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/home/discourse
850144 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/home
898764 /var/lib/docker/overlay2/58e9df9d9e2e10efb3dcf68771edd172664f8d91e3aa2e0b280fd4549bfd2a91/diff/usr/lib
966656 /var/lib/docker/overlay2/21f4d6109bd809c584ae84f9f7c50286c6126176f86a2ef61c4c24ce1e633765/diff/var/www/discourse
966660 /var/lib/docker/overlay2/21f4d6109bd809c584ae84f9f7c50286c6126176f86a2ef61c4c24ce1e633765/diff/var/www
966664 /var/lib/docker/overlay2/21f4d6109bd809c584ae84f9f7c50286c6126176f86a2ef61c4c24ce1e633765/diff/var
991800 /var/lib/docker/overlay2/21f4d6109bd809c584ae84f9f7c50286c6126176f86a2ef61c4c24ce1e633765/diff
991816 /var/lib/docker/overlay2/21f4d6109bd809c584ae84f9f7c50286c6126176f86a2ef61c4c24ce1e633765
994980 /var/lib/docker/overlay2/2749f8a24b3e28af399b256ecab7f2db0cb146939a0ef56e83858a0e696c3df6/diff/usr/lib
1089092 /var/lib/docker/overlay2/9817d45d2728572ad6dc4d62df5944dfad69c35b76753ceb260e0130863ece49/diff/var/www/discourse
1089096 /var/lib/docker/overlay2/9817d45d2728572ad6dc4d62df5944dfad69c35b76753ceb260e0130863ece49/diff/var/www
1130168 /var/lib/docker/overlay2/9817d45d2728572ad6dc4d62df5944dfad69c35b76753ceb260e0130863ece49/diff/var
1177644 /var/lib/docker/overlay2/9817d45d2728572ad6dc4d62df5944dfad69c35b76753ceb260e0130863ece49/diff
1177660 /var/lib/docker/overlay2/9817d45d2728572ad6dc4d62df5944dfad69c35b76753ceb260e0130863ece49
1224436 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d/diff/var/www/discourse
1224440 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d/diff/var/www
1224444 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d/diff/var
1234612 /lib
1248080 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d/diff
1248096 /var/lib/docker/overlay2/81fd81f27d0d8fe795f510fe8d70c4ecad96405b0e1dbb57f0440fe9c398a30d
1342320 /var/lib/docker/overlay2/58e9df9d9e2e10efb3dcf68771edd172664f8d91e3aa2e0b280fd4549bfd2a91/diff/usr
1516440 /usr
1543656 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/var/www/discourse
1543664 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/var/www
1558580 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff/var
1659548 /var/lib/docker/overlay2/58e9df9d9e2e10efb3dcf68771edd172664f8d91e3aa2e0b280fd4549bfd2a91/diff
1659564 /var/lib/docker/overlay2/58e9df9d9e2e10efb3dcf68771edd172664f8d91e3aa2e0b280fd4549bfd2a91
2040472 /var/lib/docker/overlay2/2749f8a24b3e28af399b256ecab7f2db0cb146939a0ef56e83858a0e696c3df6/diff/usr
2171304 /var/log/journal/d893af269dfb5f73239a5b6761d49ea0
2388612 /var/lib/docker/overlay2/2749f8a24b3e28af399b256ecab7f2db0cb146939a0ef56e83858a0e696c3df6/diff
2388628 /var/lib/docker/overlay2/2749f8a24b3e28af399b256ecab7f2db0cb146939a0ef56e83858a0e696c3df6
2461904 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058/diff
2461924 /var/lib/docker/overlay2/7bedcd4746ebce6e3fe7bbb5ec2c987a1c046efc715fad1e53201b18b97b6058
3064672 /var/log/journal
3276268 /var/log
10107180 /var/lib/docker/overlay2
10131984 /var/lib/docker
10396840 /var/lib
14869684 /var
20007992 /
I got additional instructions on how to handle this. The details are:

The command to remove an image is `docker rmi {image_name}`, where `{image_name}` is the name of the image you want to delete. You can also use the image ID (e.g., `docker rmi {image_id}`), and that is what you will need to delete an image whose name is `<none>`.

For example, let’s say you have the following images:

REPOSITORY     TAG      IMAGE ID       CREATED              SIZE
my-new-image   latest   c18f86ab8daa   12 seconds ago       393MB
<none>         <none>   b1ee72ab84ae   About a minute ago   393MB
my-image       latest   f5a5f24881c3   2 minutes ago        393MB

It is possible that the `<none>` image cannot be deleted because `my-new-image` is using some of its layers. What you need to do is:

docker rmi my-new-image:latest
docker rmi b1ee72ab84ae
docker build -t my-new-image .

That removes `my-new-image:latest`, which is reusing layers from the `<none>` image; then deletes the `<none>` image by its image ID, `b1ee72ab84ae`; and finally rebuilds `my-new-image`, creating all of the layers it needs.

Also check that you don’t have stopped containers that are still using the `<none>` “untagged” image. Use `docker ps -a` to see all containers, including ones that have exited. If there are any, use `docker rm {container_id}` to remove the container and then try removing the `<none>` image again.
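As a side note, `<none>` rows are what Docker calls “dangling” images, and `docker images -f dangling=true` (or `docker image prune`) handles them directly. Here is a small sketch of the same selection done by hand on the example table from the quote above (the `sample` text is just that illustrative output, not a live listing):

```shell
# Example `docker images -a` output, copied from the sample above.
sample='REPOSITORY     TAG      IMAGE ID       CREATED              SIZE
my-new-image   latest   c18f86ab8daa   12 seconds ago       393MB
<none>         <none>   b1ee72ab84ae   About a minute ago   393MB
my-image       latest   f5a5f24881c3   2 minutes ago        393MB'

# Print the IDs of untagged rows -- the same set a live daemon reports
# via `docker images -f dangling=true -q`.
dangling=$(printf '%s\n' "$sample" | awk '$1 == "<none>" { print $3 }')
printf '%s\n' "$dangling"
# -> b1ee72ab84ae
```

On a live system, `docker rmi $(docker images -f dangling=true -q)` attempts to remove all of them at once, and simply errors for any whose layers are still in use by a container or tagged image.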
What do you all think?
I think you can improve things here:
See the earlier exchange above.
This did the trick and I changed the policy as well!
I still want to track down the issue with the `<none>` image (since it’s ridiculous that it’s taking 2 GB+ of space), but you solved my most immediate problem of creating enough space to upgrade! Thank you!!
Absolutely true! For now I’m having a lot of fun learning new things, so the time is worth it.
It’s possible that once you manage to do the upgrade, those older images will be unused and will (eventually?) get removed.
(Glad I could help!)
You’re right. That’s exactly what happened. We now have 12 GB of available space. Again, my deep appreciation!
Since you are in Digital Ocean, it’s good to know that you can move /var/discourse/shared to its own Volume that you can resize.
If you do not put a partition table on the new device, but just format it directly with ext4, it gives you flexibility after the first downtime you take to move to it. When you next run low, you can add more space to the device while your instance is still running, run `resize2fs` on the mounted filesystem from within the instance, and you’ll immediately have more storage space.
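If you want to rehearse that grow-in-place flow without touching a real volume, it works on a file-backed ext4 filesystem too (a sketch, using a temp file as a stand-in for the device; on a real DO volume you enlarge the device from the control panel instead of `truncate`, and `resize2fs` runs online against the mounted filesystem):

```shell
# Rehearse growing an ext4 filesystem on a file-backed "device".
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -q -F "$img"    # format the device directly, no partition table
before=$(dumpe2fs -h "$img" 2>/dev/null | awk '/^Block count:/ { print $3 }')

truncate -s 128M "$img"                 # simulate enlarging the volume
e2fsck -f -p "$img" >/dev/null          # offline resize wants a clean check first
resize2fs "$img" >/dev/null 2>&1        # grow the filesystem to fill the device
after=$(dumpe2fs -h "$img" 2>/dev/null | awk '/^Block count:/ { print $3 }')

echo "block count: $before -> $after"
rm -f "$img"
```

With no size argument, `resize2fs` grows the filesystem to fill whatever the device now offers, which is why the online resize after a control-panel expansion is a one-liner.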
I hesitate a little to give specific instructions because it might seem like I’m going to provide support for them, but I will anyway, with the caveat that I can’t actually provide support for these instructions. Please please please take a backup, offsite, and know how to restore it on your own before trying any of this. I’m just sharing here what I actually did to handle this case…
In the Digital Ocean console, go to Manage Volumes, then add a volume of whatever size you need.
Then inside the instance it will be something like:
# ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root 9 Nov 22 19:29 scsi-0DO_Volume_var-discourse-shared -> ../../sda
Modify the rest of this to use the name that you found there. If in doubt, stop and don’t make any mistakes.
# mkfs.ext4 -L var-discourse-shared -M /var/discourse/shared /dev/disk/by-id/scsi-0DO_Volume_var-discourse-shared
# echo '/dev/disk/by-id/scsi-0DO_Volume_var-discourse-shared /var/discourse/shared ext4 defaults,nofail,discard 0 0' >> /etc/fstab
# cd /var/discourse
# ./launcher stop app
# mv shared shared-old
# mkdir shared
# mount /var/discourse/shared
# tar -C shared-old -c . | tar -C shared -x -S -p
# ./launcher start app
After you confirm that the site is working, then:
# rm -rf shared-old
After this, the system volume will be used for docker images and the operating system, but the new volume will be used for all your Discourse contents. As long as you run `./launcher cleanup` after each update, you should be in good shape going forward.
Digital Ocean has instructions for how to increase the size of a volume, including the filesystem on the instance (choose the “ext4” tab if you followed my instructions above):
Obvious point, but worth noting I think, this carries a monthly cost. It’s fairly modest, I think at time of writing it’s $0.10 per GiB per month.
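To make the arithmetic concrete, a quick sketch using the rate quoted above (illustrative only; check current Digital Ocean pricing):

```shell
# Back-of-the-envelope monthly volume cost at $0.10 per GiB per month.
rate_cents_per_gib=10
size_gib=50
monthly_cents=$((size_gib * rate_cents_per_gib))
printf '%d GiB volume: $%d.%02d/month\n' "$size_gib" \
  $((monthly_cents / 100)) $((monthly_cents % 100))
# -> 50 GiB volume: $5.00/month
```

So even a generously sized volume is cheap relative to stepping up a whole droplet tier just for disk.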
Thanks! “Explicit is better than implicit” applies. When you need only more storage, a volume can be cheaper than moving to a larger droplet, if you don’t need the extra CPU and/or memory that a larger droplet brings.
If you need the extra CPU and/or memory, then you can expand your existing droplet.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.