We created a repository called RPS Discourse Image Builder that is used to create a Discourse OCI image. I think this might be helpful to some people looking here. We mainly did this so we don’t need to wait for the Discourse deploy, which takes a long time, and to be able to version pin Discourse reliably.
We wrote a script that uses the discourse_docker repo to build the image from the stable version in as general a way as possible.
There is also a Docker Compose file that brings up the databases required for building, as well as the Discourse image itself, for testing and local development. This can run on a GitLab runner with the shell executor, but you need a system with a docker-compose version that supports profiles.
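As a rough sketch, such a Compose file could look like the following. Service names, image tags, ports, and credentials here are illustrative placeholders, not the actual file from the repo; `profiles` requires docker-compose 1.28+ or Compose v2.

```yaml
# Illustrative docker-compose.yml sketch -- all names and versions
# below are placeholders, not the repo's actual file.
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: discourse
    profiles: ["build", "dev"]   # needed both for image builds and local runs
  redis:
    image: redis:7
    profiles: ["build", "dev"]
  discourse:
    image: registry.example.com/rps/discourse:stable
    depends_on: [postgres, redis]
    ports:
      - "8080:80"
    profiles: ["dev"]            # only started for local development
```

With profiles, `docker compose --profile build up -d` brings up only the databases for a build, while `--profile dev` also starts Discourse for local testing.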
Approach
The build script is meant to run within our CI/CD pipelines so we can easily update the version. Further adjustments are done in separate repos with Dockerfiles that are based on the image created by this repo.
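A minimal sketch of what such a pipeline job could look like in `.gitlab-ci.yml`. The job name, variables, registry path, and the location of the pinned `app.yml` are assumptions for illustration, not the real pipeline; the `bootstrap` step and the `local_discourse/app` image name come from discourse_docker's launcher.

```yaml
# Hypothetical .gitlab-ci.yml sketch -- names and paths are placeholders.
build-discourse-image:
  tags: [shell]                   # runs on a shell-executor runner
  variables:
    DISCOURSE_VERSION: "v3.2.1"   # bump this to update Discourse
  script:
    - git clone https://github.com/discourse/discourse_docker.git
    # app.yml carries the version pin (see the pinning discussion below)
    - cp containers/app.yml discourse_docker/containers/app.yml
    - cd discourse_docker && ./launcher bootstrap app
    # bootstrap produces a local image named local_discourse/app;
    # tag and push it so downstream Dockerfiles can build FROM it
    - docker tag local_discourse/app "$CI_REGISTRY_IMAGE:$DISCOURSE_VERSION"
    - docker push "$CI_REGISTRY_IMAGE:$DISCOURSE_VERSION"
```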
Background
Discourse is a very nice piece of software, but it is not easy to deploy or to version pin. This repository is used to create a Docker image that can be used to deploy Discourse.
The problem is that Discourse has an unusual way of building its Docker images: the Discourse developers expect you to build the image on the target machine with their discourse_docker repo.
There have been lengthy discussions about this on the forum. TL;DR: the Discourse lead developers refuse to support a public Docker image that can be used to deploy Discourse. They want to keep the discourse_docker repo as the only official way to deploy Discourse.
The main disadvantages of this approach are:
It is not possible, from our experience, to version pin discourse reliably.
Building the image takes a long time, and the service is unavailable while the image is being built. Discourse also has the longest DevOps iteration cycle of all the services we support.
Databases are managed differently than in typical Docker-based projects.
The official Discourse deployment story is incompatible with OCI-based development workflows and deployments, like Kubernetes or even Docker Compose.
The discourse_docker launcher makes a lot of assumptions about the environment it is running in. For example, it refuses to run on rootless Podman with an error about missing storage volumes, and there is no argument or environment variable to bypass the check.
True, we try to make it friendly for webmasters from the “drag and drop this zip file’s contents with FileZilla via FTP” era, while ensuring that everyone runs recent, supported, and patched versions of all the software in the stack, even their databases.
Yes, the launcher-based flow is not compatible out of the box with container orchestration. That said, it can be made compatible by running `./launcher bootstrap app`, pushing the resulting image to a container registry, and then running said image via orchestration.
We welcome a pull request to make this possible, as it sounds generally useful. pr-welcome
That’s because Discourse 2.8 is unsupported now, which means we didn’t backport the newest Ruby to it, and it runs on a Ruby version that’s already EOL. No one should run it in production.
You literally reference the older base image, one created approximately around the date of the release you are targeting (it obviously must be at least that recent).
But if you try to pin to a version that is too old, it may not work with updated versions of the Discourse base image. I can’t remember specific examples, but, for instance, an old version of Discourse won’t work with Ruby 3.2, so you (sometimes) also need to pin discourse_docker when you pin to an old version of Discourse.
The safest and, in most cases, least wasteful solution is to build an image once and push it to a repo rather than building a new image at every deployment. And if you have plugins, you’re likely to need to pin each of those as well.
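For reference, pinning in `containers/app.yml` looks roughly like the sketch below. The version tag, plugin repo, and `<commit-sha>` placeholder are examples only; check the discourse_docker templates for the exact keys, and remember that very old Discourse versions may additionally require checking out a matching discourse_docker revision.

```yaml
# Excerpt-style sketch of containers/app.yml -- values are examples.
params:
  version: v3.1.5          # pin Discourse itself to a release tag

hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          # pin each plugin to a known commit as well
          - git clone https://github.com/discourse/docker_manager.git
          - cd docker_manager && git checkout <commit-sha>
```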
I’ve done this for several clients for ECS as well as k8s on GCP and AWS.
I’m pretty sure that you can if you also pin discourse_docker to the old version. As I mentioned above, it’s more complicated with plugins, as you’re likely to need to pin those to the old versions as well. If you really want to maintain old versions, building them exactly once and pushing them to a repo is the way to go. I’m doing this with a client to test upgrade paths from current production to latest, and it’s working smoothly.