Can Discourse ship frequent Docker images that do not need to be bootstrapped?

Found this, works like a charm: GitHub - bitnami/bitnami-docker-discourse: Bitnami Docker Image for Discourse

docker-compose up -d and I have a local dev installation in my Windows environment.
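For reference, the compose file behind that one-liner looks roughly like the sketch below. Service names, images, and variables are from memory and may differ from Bitnami's published file; treat every line as an assumption, not the actual upstream definition:

```yaml
# Sketch of a Bitnami-style Discourse stack for local development.
# Images, env vars, and the sidekiq command are assumptions.
version: "2"
services:
  postgresql:
    image: bitnami/postgresql
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  redis:
    image: bitnami/redis
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
  discourse:
    image: bitnami/discourse
    ports:
      - "80:3000"
    environment:
      - POSTGRESQL_HOST=postgresql
      - REDIS_HOST=redis
  sidekiq:
    image: bitnami/discourse
    # entrypoint for the background-job worker; exact command is an assumption
    command: nami start --foreground discourse-sidekiq
```

The notable design point is that app, database, cache, and worker are separate services, which is exactly the "decomposed" layout debated below.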

It does seem to rely on Bitnami’s package manager and distribution, and external dependencies, but I guess I will take it.


This is why I don’t understand the controversy over decomposing the Discourse Docker container into separate services. The best practices are out there and well documented. Docker itself (not just docker-compose) natively supports single-node swarm mode as one of its primary supported ways of running, driven by a compose-format file. The devs continue (fairly) to suggest that anyone who wants the feature submit a PR to implement it, but third parties have nowhere near the application and code knowledge required to do so in an architecturally elegant, sustainable way. There is no other open-source forum software with the same feature set and quality as Discourse. But the fact remains that the project is accruing technical debt on the deployment side by not following best practices or using the native features of the build system it relies on.

As in commercial software, so too in open-source projects and products: the market will decide. The desire to follow standards and best practices has already resulted in multiple forks of the codebase. Eventually, if that desire isn’t met, either another project that does follow them will take Discourse’s place, or a fork will succeed while this project is abandoned. Of course, this won’t take weeks or months but years. The Discourse team are great developers with a tremendous product, and I have no doubt they know and love it more than anyone else. But rather than having folks who know Docker attempt to modify this powerful, complex app to fit best practices, standards, and native Docker features, why not undertake to learn them fully (since you are already using them) and leverage the power of compose-based (even multi-stage) builds to deploy your app?


The thing is, you are making an assumption here that our current design is somehow limiting from a deployment perspective. This very site auto-scaled yesterday from 3 to 10 nodes because CPU was high for … reasons. This happened automatically: the web image in the AWS container registry got deployed onto new EC2 instances, magically, with no human intervention.

You are trying to convince me here that we need to bite off a significant piece of work that will certainly make debugging Docker in the wild harder for hobbyists, all for the greater good of following best practices.

Today, when a hobbyist has a problem I tell them:

cd /var/discourse
./launcher rebuild app

It resolves most issues. The script does not need to reason about a pod and four interrelated services. We don’t have to worry about end users configuring logging, log rotation, and a bunch of other complex stuff.

We do not host a monolithic container with db+redis+app; we host web pods, db pods, redis pods, and so on, which have IPv6 addresses, and using service discovery from container labels we glue everything together magically in our environment. Hobbyists do not need any of this.


I made no such assumption. Note that technical debt has nothing to do with ease of deployment or even availability. It’s an issue of code support and best practices as Docker continues to be developed.

Not exactly. I’m trying to convince you to bite off that significant piece of work to avoid accruing technical debt you’ll inevitably be working off anyway, if you plan to stick with Docker in your deployment methodology. That it will make debugging harder is an assumption you are making. You can easily provide a turnkey solution on top of the composed services, as well as ship a set of binaries that do the work on behalf of the user if desired. All of this can be done while cleaning up, and likely even simplifying, the deployment.

One more point here. You can easily implement a log-collection container and/or run logrotate within the Docker containers; just build it into the image. Beyond that, it depends on your point of view: you may not be familiar with Docker swarms, but they are very resilient at recovering from many kinds of failure. Further, you could provide a backup image whose only purpose is to dump database exports to a bind-mounted location.
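To illustrate how small "build it into the image" can be: rotating an in-container log is a one-file affair. A minimal logrotate rule baked into the image might look like this (the log path and retention are assumptions for illustration):

```
# /etc/logrotate.d/app — rotate the web server's logs daily, keep a week
/var/log/nginx/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

`copytruncate` avoids having to signal the logging process, which is convenient when it runs as PID 1 inside a container.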

Can you explain to me how any of this is going to improve hobbyist adoption of Discourse, or make it easier for us to support hobbyists installing Discourse on $10 DigitalOcean droplets?

By the way, we already run logrotate within our container.


I do wonder about the impact of downtime due to rebuilds. The thing that concerns me is Google’s attitude to it.

What kind of technical debt do you think we’re building up here? Also, the argument that we’re not following “best practices” in Docker is flawed: best practices are meant to act as guidelines, not absolute truth. For hobbyist installs, we think shipping a single container that contains all the services is the best practice that we can support in a sane way.

Just putting this out there :wink:


Rebuilds are far from a common or regular practice. Most upgrades can be performed from /admin/upgrade.

And for sites where any server error is a problem there are guides on how to present a holding page during the upgrade process.


I have to perform rebuilds when I add or remove plugins, and when they are mandated by the occasional upgrade (several times a year). But yes, it’s not a showstopper for a local community forum like mine, and /admin/upgrade works very well for regular updates.

I present a holding page during my rebuild upgrades for the benefit of users, but how would the Googlebot perceive this? To the bot, my homepage suddenly has no intra-site links, and all the pages the bot has previously indexed have their content replaced with the holding page.


If you’re catching all requests with a 302 to show the holding page there should be zero impact. Google confirmed a couple of years back that redirects don’t impact SEO/pagerank:
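For illustration, catching everything with a temporary redirect might look like this nginx sketch (the file paths are assumptions; only the 302 mechanic is the point):

```nginx
# During the rebuild, answer every request with a 302 to a static holding
# page. A 302 is temporary, so crawlers keep the original URLs indexed
# instead of treating the content as replaced.
location = /holding.html {
    root /var/www/maintenance;
}
location / {
    return 302 /holding.html;
}
```

The exact-match `location =` block takes precedence, so the holding page itself is served directly rather than redirecting in a loop.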

Gary is the “House Elf and Chief of Sunshine and Happiness at Google” (Webmaster Trends Analyst)

If you’ve just rebranded the 502 error page then yes, it will have an impact.


What you don’t understand is that most of the people deploying Discourse don’t know what Docker is, much less care about best practices.

If you want a Docker image, just use pups to generate it yourself. Is that what you guys do, @sam? Do you create a separate container image for each enterprise client with their plugins?

If you want to avoid downtime when you need to rebuild, use a two-container configuration. You can search here or look in discourse-setup for a command-line option that will build one for you. Then downtime is limited to the time it takes for the new container to boot up.

If you want zero downtime, configure haproxy with multiple web containers.
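That multi-web setup might be sketched like this in haproxy terms. The server names and ports are assumptions; `/srv/status` is Discourse's health-check route, so a container that is mid-rebuild simply fails its check and is skipped:

```
# Two web containers behind haproxy; rebuild one at a time and the
# other keeps serving. Addresses and ports are illustrative.
backend discourse_web
    balance roundrobin
    option httpchk GET /srv/status
    server web1 127.0.0.1:8081 check
    server web2 127.0.0.1:8082 check
```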


There’s a very large number of people running their own servers who are looking for exactly this. I personally know dozens of hobbyists who want a forum that is truly self-hosted, not on a droplet or other VPS. Many self-hosting enthusiasts run multiple apps (think Nextcloud, Plex, wikis, CMS/blogs, etc.), whether they host them at home or on a VPS. Here’s my situation: I’m running a Docker swarm for dozens of apps, and as far as I can tell, the way the tooling works now, Discourse can’t be incorporated into that swarm with Traefik or HAProxy reverse-proxying requests.

Maybe (hopefully!) I’m mistaken and you can explain or point me to instructions on how to take your final, single completed container, reference it in a new docker-compose and run it in a swarm?

I understand that. But the developers of Discourse do, and they depend on it for their deployments. I apologize, I don’t know what pups is; it looks like it’s related to Ansible, which I don’t know either.

Debt in the deployment management code and process. It is very complicated right now, with lots of moving parts that are difficult to understand and support. One of Docker’s primary draws is the encapsulation of prerequisites: builds that complete the prerequisite work with less risk of a user breaking something, especially if the whole thing is ultimately wrapped in a script that creates or upgrades the installation. That gives those who don’t (and don’t need to) understand Docker well a good solution, and it gives engineers or “hobbyists” an option too, since they could skip the wrapper and compose things the way they want.

One of the things I think is overlooked here is that a major reason for self-hosting is control over data, backups, and so on. Docker makes this especially convenient, since you can bind-mount volumes and back those up, and even run a container in the stack whose only role is backing up the DB to a flat file in the backup location. When you can’t self-host Discourse alongside other Docker apps, you effectively cannot self-host it on your own server at all; you need to purchase a VPS just for Discourse, rather than reusing the same hardware and ecosystem that works for the vast majority of apps that use Docker in their deployment.


By way of disclaimer, I am not employed or compensated by any hardware vendor, SaaS provider, or forum software developer.

Think about how many potentially unnecessary DigitalOcean, Linode, and Vultr subscriptions Discourse has been responsible for launching. Then consider that there are companies building a revenue stream hosting Discourse for others, in part because it is too complex for them:

The Discourse forum software is fantastic, but quite hard to install and host it yourself. We think it’s too great a product to be limited to a technical audience only.

Then again, given the way you’ve modeled your revenue stream, it makes zero sense to make things easier to install and run: simple containers that expose only the necessary ports and bind mounts, using environment variables for everything else, so that deployment is as simple as docker stack deploy or docker-compose up. Like I said, maybe and hopefully I’m the one who is mistaken, and there’s a way to take the final Docker container and deploy it in a swarm or other compose environment alongside other apps and a reverse proxy.


This is exactly the point many folks have been making in this lengthy thread: is there a solution for those who do know what nano is and can exit vi/vim just fine? I trust you know your customer base better than I do, but I have to imagine that this level of basic Linux knowledge covers the overwhelming majority of those wanting to self-host open-source software on Linux.


Yes. Create a web_only.yml (there is a sample in discourse_docker) and use that to build your Docker image with your plugins. Then add it to your swarm. Everything can be configured via environment variables; you can see them in the last line of output when you do a ./launcher start web_only. It’s not that hard, but you’re not going to get free support here to help you figure it out (and it’s not just you, but a whole bunch of people with a zillion different definitions of Best Practices who would need much, much more help than you would).

I can probably help you figure out what you need to know in an hour or two of my time.
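To make that concrete, a stack file referencing the image that ./launcher bootstrap web_only produces might look roughly like the sketch below. The image tag, network name, and variables shown are assumptions, not a supported configuration; check what the bootstrap actually tagged before using anything like this:

```yaml
# Hypothetical swarm stack for a launcher-built web image.
version: "3.7"
services:
  web:
    image: local_discourse/web_only:latest  # tag is an assumption; verify with `docker images`
    environment:
      DISCOURSE_HOSTNAME: forum.example.com  # placeholder hostname
      DISCOURSE_DB_HOST: db
      DISCOURSE_REDIS_HOST: redis
    networks:
      - proxy
networks:
  proxy:
    external: true  # shared network where Traefik/HAProxy already lives
```

The point of the sketch is only that the built image plus environment variables is, in principle, enough for `docker stack deploy`.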


Recently I found the commands to fast-forward a depth-1 git clone:

git checkout --detach   # avoid tangling with the checked-out branch
git fetch --depth 1 -f origin [branch|commit]
git checkout -f [branch|commit]

Thanks to the team.
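Those three commands can be exercised end to end against a throwaway local repository. Everything below (the temp directory, the repo layout, the branch name `main`) is illustrative; it assumes git 2.28+ for `init -b`:

```shell
# Build a tiny "upstream" repo, take a depth-1 clone of it, advance
# upstream, then fast-forward the shallow clone with the commands above.
set -e
tmp=$(mktemp -d)
git init -q -b main "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=u@example.com -c user.name=u \
    commit -q --allow-empty -m one
# file:// URL so --depth is honored for a local clone
git clone -q --depth 1 "file://$tmp/upstream" "$tmp/shallow"
git -C "$tmp/upstream" -c user.email=u@example.com -c user.name=u \
    commit -q --allow-empty -m two
cd "$tmp/shallow"
git checkout -q --detach              # avoid tangling with the checked-out branch
git fetch -q --depth 1 -f origin main # grab only the new tip
git checkout -q -f FETCH_HEAD         # now at "two", history still depth 1
```

After this, `git log -1` shows the new upstream commit while the clone stays shallow.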


So would running ./launcher bootstrap mybase, with bundle exec rake db:migrate; bundle exec rake assets:precompile added to the init script, do something like that? Just run it against a test database, or maybe strip those rake tasks out of web.template.yml?

EDIT: Found this comment:

  # this allows us to bootstrap on one machine and then run on another

which might answer my question.


A post was split to a new topic: Thank you for creating a polished, performance tweaked product that can be deployed simply