Background: As detailed elsewhere, my Discourse installation was hosted on a VPS whose disk was too small to complete an upgrade. At first I clicked “Upgrade” in the admin control panel. The upgrade failed, and the GUI never worked again. After that, I logged into the console of my VPS and gave the famous ./launcher rebuild app command. That also never ran to completion: I had completely run out of disk space. To get more space and stay on budget, I decided to move my whole setup to a new VPS with a different hosting company. Saving the precious site data was a high priority.
Failures: The two most obvious methods to make a backup did not work:
my original attempt to upgrade broke the web-based GUI, so there was no way to reach the admin control panel and initiate a backup from there; and
trying to get inside the Docker container and run some shell commands in it didn’t work either. The recommended command for this is /var/discourse/launcher enter app. But, at least in my case, the launcher script would try to rebuild the app before letting me enter it, and rebuilds were consistently failing, so this command never even got me a container, never mind a shell inside it.
Success: I was about to give up when I got a pleasant surprise. Working at the command line of my little VM, I ran docker ps and learned that there was an active container named app. And Docker has a direct way to get inside a running container: the command is docker exec -it app bash.
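For anyone following along, the sequence was roughly this (the container name app is simply whatever docker ps reports on your own system):

```
# List running containers; a standard Discourse install names its container "app"
docker ps

# Open an interactive shell inside the running container,
# bypassing the ./launcher wrapper entirely
docker exec -it app bash
```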
Inside the container, I was able to make progress: I issued the command discourse backup, waited a few minutes, and then copied the <backup>.tar.gz file to a safe new location. With a current backup in hand, it was possible to finish migrating my setup to its new home. (There are other threads on these forums showing how to do this.)
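In case it helps the next person, here is roughly what that looked like. The backup path shown is the usual one for a standalone install; check your own container if it is laid out differently:

```
# Inside the container (after docker exec -it app bash):
discourse backup
# The backup lands under /var/www/discourse/public/backups/default/

# Back on the host, copy the backups out of the container,
# then move them somewhere safe off the VPS (scp, rsync, etc.)
docker cp app:/var/www/discourse/public/backups/default ./discourse-backups
```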
The key point here is that the above docker command to enter the container worked, even when the Discourse-specific ./launcher command did not.
Thanks to the inventors and maintainers of this fine product.
During the days when I was trying to get my original setup working, I thought I had done everything possible to reclaim space: certainly ./launcher cleanup, but also much more … removing old kernels, clearing the apt cache, ditching non-essential software, and so on.
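For the record, that clean-up amounted to roughly the following (a sketch, assuming a Debian/Ubuntu VPS with a standard /var/discourse install):

```
# Remove old Discourse container images no longer in use
cd /var/discourse
./launcher cleanup

# General Debian/Ubuntu housekeeping
apt-get clean                # clear the apt package cache
apt-get autoremove --purge   # remove old kernels and other unused packages
```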
After I committed to moving my whole site and put a bunch of time into the process, I wondered whether I could have done more … but by then I had lost the drive to investigate further. (cf. “sunk cost fallacy”.) To be specific, the VPS I am just about to abandon has a nominal disk size of 25G. About 19G of that was dedicated to the directory /var/lib/docker/overlay2. And the only Docker containers I was running were Discourse and its associated mail-receiver. Experience suggests that Discourse, powerful though it is, should be able to run with a lot less than 19G on the disk. But internet searches seemed to indicate that making changes inside the overlay2 directory was unsafe, so I stopped there.
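For anyone in the same position, the one thing that does seem safe is to ask Docker itself what that space holds, rather than editing overlay2 by hand; something like:

```
# Summary of space used by images, containers, local volumes and build cache
docker system df

# Per-image and per-container breakdown
docker system df -v
```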
In my fresh new setup, the directory /var/lib/docker/overlay2 occupies 13G. Still enormous.
I chose Discourse to run the forums on my small-scale hobby website in the hopes that it would “just work” – i.e., that it would be super simple to administer without learning a bunch of new things. This seems to be mostly correct, if one has sufficient (excessive?) resources to allocate.
My new plan is to blindly hope that the overlay2 directory does not grow over time and swamp the 50G disk in my new VPS. If you (or anyone else) know how to keep the size of the Docker and Discourse dynamic duo under control, I’d love to hear about it. It would be a nice capstone to the rest of the learning I have done in recent days. Thanks again.
Glad you were able to rescue yourself. I run two small forums, one on 20G of storage and the other on 25G. I do sometimes have to spend quite a lot of time and ingenuity keeping them working. But I also seem to keep using (and posting about) the same set of tactics. See below.
Discourse development optimises for things other than running on minimal-cost hardware, although it just about manages to keep working for me in my constrained environment. Long may it continue.
The key to working in small-storage setups is to measure what’s going on - too often I see people guessing instead. My approach will always start with measurement, along the lines of the sketch below.
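Something like this, just to see where the space has actually gone before deciding what to do about it (paths and numbers will of course differ per setup):

```
# Which filesystem is full, and by how much?
df -h

# The largest directories on that filesystem (-x stays on one filesystem)
du -xh --max-depth=2 / | sort -h | tail -n 15
```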
For more, perhaps search my posts for prune and journalctl and kernel.
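To give a flavour of what those searches will turn up, the recurring tactics look roughly like this - a sketch only, so check what each command will remove before running it on your own system:

```
# Reclaim space from stopped containers, dangling images and build cache
docker system prune

# Cap the systemd journal
journalctl --vacuum-size=100M

# List installed kernels, then purge the old ones via apt
dpkg -l 'linux-image-*'
```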