Can Discourse ship frequent Docker images that do not need to be bootstrapped?

I doubt it. It’s crazy difficult to install docker-compose. It gets installed automatically if you install Docker on Mac or Windows, but on Linux, even though there’s a package repo that will install Docker itself, if you want docker-compose you have to download a shell script on your own. It’s like they don’t want you to use it on Linux. That fact aside, it’s not clear that people necessarily understand docker-compose. I’ve tried several times, and I still find it no less obfuscated than the yml file that configures Discourse.

But, really, if there had been a little text that suggested you edit the yml file by hand, would you have been able to move on and follow the Discourse-with-other-web-server instructions?


I was trying to muddle along that way before I came in here in the first place, asking questions and trying to make helpful suggestions. The last time I posted about this was sometime early last year, I believe, and basically the whole problem fell on deaf ears; I got no response at all.

I remember that before there was the discourse-setup script, there was documentation on how to do it using just the launcher, if I recall correctly. Forgive me if my recollection is a bit off; it’s been a while. I managed to easily get a multi-container setup running using the launcher. I ended up having to write my own wrapper script to automate some of it so that upgrades would be easier, and things like that (a bit meta, I know), but it all came together rather nicely.

The reason I was making the suggestions was that, if this is still a problem one year later, it might be ideal to offload some of the documentation work.

Maybe there should be a “from zero to hero” kind of guide written into the Advanced guide, or something along those lines, to help people get a working base: both the standalone model and a setup that isn’t standalone-only.


@mpalmer, if suitable, please merge the following thread here. I was thinking of posting here in the first place, but I thought that this thread is already loaded with problem descriptions and justifications, so a potential solution perhaps would be better suited in a separate thread.

How does any of this address:

I would add to this:

  1. cd /var/discourse && ./launcher rebuild app is a scourge that our free installers hate. We want a web UI for that.

How does any of the multi-FROM image build fanciness move the needle for 2, 3, or 4?

It simply does not.

It allows us to cut out a few shell scripts, possibly.

If anyone really wants to help here, there are specifics that can be worked on.

A no-brainer that keeps popping up OVER AND OVER AND OVER again is :rage: yaml, the worst config format in the world, hated by humans ever since it was invented. Dear docker-compose fans: docker-compose is yaml.

Want to make a difference that will help lots of people? Add optional toml support to launcher/pups. That would be huge. ENORMOUS. Imagine if all the effort that went into this topic had gone into building a toml patch. We would have toml today.
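To make the idea concrete, here is a purely hypothetical sketch of what a TOML container definition could look like. The schema is imagined (launcher/pups has no TOML support today); the keys just mirror familiar app.yml fields.

```toml
# Hypothetical containers/app.toml; the schema is imagined,
# the keys mirror the usual app.yml fields.
templates = [
  "templates/postgres.template.yml",
  "templates/redis.template.yml",
  "templates/web.template.yml",
]

expose = ["80:80", "443:443"]

[env]
LANG = "en_US.UTF-8"
DISCOURSE_HOSTNAME = "discourse.example.com"
DISCOURSE_DEVELOPER_EMAILS = "me@example.com"
DISCOURSE_SMTP_ADDRESS = "smtp.example.com"
```

Whether the data model maps cleanly is exactly the kind of detail a patch would have to work out, but the surface syntax is far harder to get wrong than whitespace-sensitive yaml.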

The second area that would be interesting is experimentation with an “admin/upgrade web container” that is separate from Discourse. That way we could throw away the existing web updater and replace it with something that talks directly to Docker. There are lots and lots of details here, but it is 100% clear that it is better to have one way to upgrade Discourse.


If you don’t already have something in mind: I’ve seen a few solutions built around an admin Docker container that mounts the Docker socket to interact with the daemon. Specifically, they were for service discovery registration through Docker, but I don’t see why the approach wouldn’t work here either.
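For illustration, a minimal compose-style sketch of such an admin container. The image name is a placeholder, not a published image:

```yaml
# Hypothetical sketch: an "admin" service that can control the host's
# Docker daemon by bind-mounting the Docker socket.
# "discourse-admin" is a placeholder image name.
version: "3"
services:
  admin:
    image: discourse-admin
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
```

Note that bind-mounting the Docker socket effectively grants root on the host, so whatever listens inside that container needs to be locked down very carefully.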

I proposed a solution over here. (Just make discourse-setup or discourse-multi-setup do that.).

It’s not what you’re talking about, but I’m getting close to having a Discourse installer web app that, given Digital Ocean and Mailgun API keys, configures everything, installs Discourse, and tells you what DNS records to set. Once it’s deployed, I’ll start adding features like rebuilds, plugin management, and so on.

Yeah, we’ve got a bunch of stuff in our infrastructure that talks directly to docker. We even ran registrator for a while, until consul turned out to be a dog’s breakfast.

It’s a big pile of work, though, to build something that operates as a one-stop frontend proxy to everything Discourse-related (because in order to have a web-based control panel that you can access via the same address, you need to have one place listening which then proxies elsewhere) and then passes requests to the “manager” container, Discourse itself, etc etc etc as required. Big job. Yuuuuuge.


I figured you guys had something planned – I’m definitely looking forward to seeing what you guys come up with. Obviously balancing security + accessibility on things like this will be an immense undertaking.

Disheartened to hear that consul+registrator didn’t work out for you guys, though; from the outset, it looked like a fairly elegant Docker solution. I’m fond of consul, but I’m also at a place with fairly static servers that’s less docker-centric, so I’m not actually fighting with it all the time.

(Possibly veering a little off the topic here…)

Yeah, I was heartbroken when consul turned out to be a network pig at scale; it seemed like such a good solution. Registrator was always a bit of a love-hate thing; it really did not want to support IPv6, and occasionally decided it would duplicate registrations under different names and then not clean up after itself. Nothing significant, but enough to make me grumble.


In case this has been missed: the new Docker version includes a multi-stage build feature, specifically built to handle build steps in a better way than before. I’ll (have to) leave it to the experts to assess whether this helps in any way to solve (at least part of) the issue at hand. See the release announcement blog post linked below:
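For readers who haven’t seen the feature, a multi-stage Dockerfile looks roughly like this. This is a generic illustration only, not the actual Discourse image layout; the stage names and build commands are made up:

```dockerfile
# Build stage: install gems and precompile assets with the full toolchain.
FROM ruby:2.4 AS builder
WORKDIR /src
COPY Gemfile Gemfile.lock ./
RUN bundle install --deployment
COPY . .
RUN bundle exec rake assets:precompile

# Runtime stage: copy only the built artifacts into a slimmer base image,
# leaving compilers and build dependencies out of the final image.
FROM ruby:2.4-slim
WORKDIR /app
COPY --from=builder /src /app
CMD ["bundle", "exec", "puma"]
```

The point of `COPY --from=builder` is that the final image never contains the build toolchain, which previously required juggling separate build scripts or throwaway containers.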


This has been the #1 reason that has kept me away from Discourse for several years :frowning:

It’s like if one was buying a phone with no OS, but bundled with a firmware flashing kit because the manufacturer argues that you need it.

I, for one, strongly think that this is an adoption barrier, as silly as it may sound.

I hope this changes.

Would you like to help us improve it by showing us how it could be done better? We’d happily do things in a more idiomatic way if it would address all our concerns.


Seems that this guy had the right idea:

And yet… no mention of where one can download a demonstration of this superlative alternative.


In practice, bootstrapping really isn’t a showstopper. Neither in terms of downtime, nor technical skills required.

Even with a relatively large forum and many plugins, a rebuild only takes a fraction of the time that a backup has the forum in read-only mode. Our site spans the globe, and so far we’ve had no complaints about the forum being offline during a rebuild, or read-only during a backup. In fact, only a few users have even noticed rebuilds and backups.

It’s an online discussion platform, not the GPS satellite network.


Found this, works like a charm:

`docker-compose up -d` and I have a local dev installation in my Windows environment.

It does seem to rely on Bitnami’s package manager, distribution, and external dependencies, but I guess I’ll take it.
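For anyone curious about the general shape of such a setup, a compose file along these lines splits the services apart. The image tags and environment values below are placeholders; check the compose file you actually downloaded for the real ones:

```yaml
# Rough sketch only: separate app, database, and redis services.
# Tags, credentials, and ports are placeholders.
version: "3"
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_USER: discourse
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:6
  discourse:
    image: bitnami/discourse   # pin a real version tag in practice
    ports:
      - "80:3000"
    depends_on:
      - postgres
      - redis
volumes:
  pgdata:
```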


This is why I don’t understand the controversy over decomposing the Docker containers for Discourse. The best practices are out there and available. Docker itself (not docker-compose) natively supports even single-node swarm mode as one of the primary supported ways of running (using a version of the docker-compose format). The devs continue (fairly) to suggest that someone who wants the feature submit a PR to implement such a change, but third parties have nowhere near the application and code knowledge required to do so in an architecturally elegant, sustainable way. The fact is that there is no open-source forum software with the same feature set and quality as Discourse. But the fact remains that this is accruing technical debt on the deployment side, as it fails to follow best practices and utilize the native features of the build system it uses.

As in commercial software, so too in open-source projects and products: the market will decide. The desire to follow standards and best practices has already resulted in multiple codebase forks. Eventually, if this desire isn’t met, another project that does will take its place, or a fork of this project will succeed while this one is abandoned. Of course, this won’t take weeks or months, but years. The Discourse team are great developers with a tremendous product. I have no doubt they know and love their product more than anyone else. But rather than having folks who know Docker attempt to modify this powerful and complex app to adapt it to best practices / standards / native Docker features, why not undertake to learn them fully (since you are already using them) and leverage the power of compose-based (even multi-stage) builds to deploy your app?


The thing is, you are making an assumption here that our current design is somehow limiting from a deployment perspective. This very site auto-scaled yesterday from 3 to 10 nodes because CPU was high for … reasons. This happened automatically: the web image in the AWS container store got deployed on new EC2 instances, magically, with no human intervention.

You are trying to convince me here that we need to bite off a significant piece of work, one that will certainly make debugging Docker in the wild harder for hobbyists, for the greater good of following best practices.

Today, when a hobbyist has a problem I tell them:

```
cd /var/discourse
./launcher rebuild app
```

It resolves most issues. The script does not need to reason about a pod and 4 inter-related services. We don’t have to worry about end users configuring logging and log rotation and a bunch of other complex stuff.

We do not host a monolithic container with db+redis+app; we host web pods, db pods, redis pods, and so on, which have IPv6 addresses, and using service discovery from container labels we glue everything together magically in our environment. Hobbyists do not need any of this.


I made no such assumption. Note that technical debt has nothing to do with ease of deployment or even availability. It’s an issue of code support and best practices in Docker as it continues to be developed.

Not exactly. I’m trying to convince you that you need to bite off the significant piece of work to avoid accruing technical debt that you’ll inevitably be working to pay down anyway if you plan to stick with Docker in your deployment methodology. That it will make debugging harder is an assumption you are making. You can easily provide a turnkey solution on top of the composed services, as well as ship a set of binaries that do the work on behalf of the user if desired. All of this can be done while cleaning up, and likely even simplifying, the deployment.


One more point here. You can easily implement a log-collection container and/or do log rotation within Docker containers; just build it into the image. Also, it depends on your point of view: you may not be familiar with Docker swarms, but they are very resilient at recovering from many kinds of failure. Further, you could provide a backup image whose only purpose is to dump database exports to a bind-mounted location.
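The backup-image idea could be sketched as a one-shot compose service along these lines. Hostnames, credentials, and paths are all placeholders:

```yaml
# Hypothetical one-shot backup service: dumps the database to a
# bind-mounted host directory, then exits. All names are placeholders.
version: "3"
services:
  backup:
    image: postgres:13        # reuses the pg_dump client in the postgres image
    environment:
      PGPASSWORD: change-me
    volumes:
      - /var/discourse/backups:/backups
    entrypoint: >
      sh -c 'pg_dump --host=postgres --username=discourse
             --file=/backups/discourse-$$(date +%F).sql discourse'
```

Run it on a cron schedule (or as a swarm one-shot task) and the host directory accumulates dated SQL dumps without touching the running app container.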