Can Discourse ship frequent Docker images that do not need to be bootstrapped?

Sure, I’ve always estimated that engineers cost $10,000 per person per month – but as you can read here some people say it should be higher:

A rule of thumb is that an engineer (the most common early employee for Silicon Valley startups) costs all-in about $15k per month

  • If it takes two engineers four months to build out a big Discourse installation, that means it costs the company between $80,000 and $120,000 just to get Discourse deployed and configured and stable at high volume.

  • If we assume that, after the initial setup period, ongoing maintenance is perhaps 3 days of engineering time per month for those same two engineers, at equivalent hourly rates, that’s between $3,000 and $3,750 per month ongoing. Maybe a bit less, but certainly that’s a reasonable estimate.

This makes our enterprise hosting plan – where we guarantee high availability, ultra-fast speeds (with CDN), and super redundancy for extreme scale – seem like quite a bargain at $1.5k/month.

(We also have an emerging enterprise VIP tier for clients that need more ongoing custom development work.)


Ok, let’s go back to earth :slightly_smiling:
Based on this, I put together this docker-compose setup.

This should be easier to scale than the original setup, and it follows Docker principles (although it could be better engineered).
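For illustration, a split along these lines might look something like the sketch below. The service layout is the general idea only; the image names are placeholders, not the actual recipe from my repo, though `DISCOURSE_DB_HOST` and `DISCOURSE_REDIS_HOST` are the standard Discourse settings for pointing at external services:

```yaml
version: '2'
services:
  web:
    image: example/discourse-web    # placeholder: a pre-bootstrapped app image
    ports:
      - "80:80"
    environment:
      DISCOURSE_DB_HOST: postgres   # resolved via Compose's service DNS
      DISCOURSE_REDIS_HOST: redis
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:9.5
    volumes:
      - pg-data:/var/lib/postgresql/data
  redis:
    image: redis:3
    volumes:
      - redis-data:/data
volumes:
  pg-data:
  redis-data:
```

The point being that each service gets its own container and named volume, so any one of them can be replaced or moved to another host independently.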

It is just a proof of concept for now, but as an IndieHoster, I’m selling the hosting service, and I’m committed to maintaining it.

Let me know if you need any help getting started with it!

All the best!



As the person who is in charge of all the infrastructure here at Civilized Discourse Construction Kit, Inc, for running things “at massive scale”, and someone who’s been doing ops for a long, long time, at a lot of places, let me add my two cents.

First up, I’d like to disagree with Jeff’s third point just a tiny bit: personally, and speaking entirely and only for myself, I think it would be inconsequential to CDCK’s business if there was a one-click “massive scale” Discourse installer. My belief is that the people who are willing to pay us, CDCK, for hosting, to support the development of the forum software and to get direct and high-priority access to the minds who know Discourse best, are almost entirely disjoint from the people who are absolutely committed to doing it themselves.

However, it simply isn’t possible to provide a one-click, massive-scale install option that will satisfy more than a tiny percentage of the user population. IT IS IMPOSSIBLE. That may seem like a bold claim, and even a self-serving one, but it’s the inexorable outcome of how the ops world currently works: everyone has their own preferences.

Take even the choice of “orchestration” layer. You suggest using Docker Compose. OK, but what about all those people out there who think Compose is a steaming pile, and wouldn’t touch it with a barge pole? They’ve got their own preferences as to orchestration, and they’d be as dissatisfied with a one-click massive-scale installer as you are with the current state of affairs.

Then there’s the other components in the core hosting infrastructure, like Redis and PostgreSQL. There are no shortage of Discourse users who run it on AWS, whose preference is to use Elasticache to provide Redis, and RDS for PostgreSQL. So our one-click, massive-scale installer would need to be able to account for that, in order to support people running on AWS. But we can’t just detect “oh, you’re running in AWS” and assume we should use Elasticache and RDS, because some people still prefer to run their own Redis and PostgreSQL. The same applies for “private cloud” deployments – some people have existing Redis/PostgreSQL tooling they’d prefer to use, while others would want us to set the data storage up for them.

Of course, massive scale is not valuable without monitoring, and there’s a myriad of options there, all of which we’d need to support, or again we’d be alienating a huge percentage of the potential userbase. Whatever we choose, there’d be a pitchfork-wielding band at the gates of the CDCK compound demanding we support their preferred monitoring system.

In theory, we could defer supporting a myriad of different systems to the open source community, but as you yourself note:

So we can’t really rely on the community to contribute large-scale engineering efforts like that… and we’re back to pitchforks at the gate.

But let’s say, for the sake of argument, that somehow we manage to code up support for everyone’s preferences into one neat little (ha!) package. Would it be a “one-click” massive-scale installer any more? Hell no! It’d be a maze of questions, somewhat like what building your own Linux kernel looks like. The damned thing would be nigh-impossible for anyone who isn’t deeply involved in the development of the system to be able to navigate without blowing their highly-available foot off.

All this highlights the stark reality of ops: running at scale is not easy. You need smart people who know all the aspects of the system they’re working in. There’s no way around that, and there’s no “one-click” installer that is ever going to solve that problem. Sure, aspects of the problem will be solved over time (Docker is about as close as we’ve got to a pervasive solution to the “shoehorn a single software program into its own environment” problem, to the chagrin of some), and other parts will no doubt be standardised, but no matter what, running at the front of the pack will always involve a lot of skull-sweat and custom work.


I don’t feel like you read and understood my post. I’m really not asking for a one-click high availability solution.


I am very confused. What exactly are you asking for? Can you provide five one-sentence bullet points?


There is a lot of knowledge in our base image: we install particular versions of ImageMagick, pngcrush, and so on. We install a particular version of Ruby, and jemalloc.

When it comes to running Discourse we are particular about using unicorn and forking out sidekiq to conserve memory.

Nothing is stopping you building images based on our base images and then launching them with Compose, if that is how you roll.

But there is a lot of knowledge baked into our process you are throwing away if you simply put your hands up and say “you are doing it wrong” and go and start from scratch.

On topic of compose:

  • How are you going to run asset precompilation and db migration?
  • How are you going to allow people to run plugins?
  • How are you going to ensure people don’t have 700 gigs of log files that are never rotated?
  • How are you going to provide your users with a 1 click upgrade from a web page?
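On the log-rotation point specifically, Compose can at least cap Docker’s own json-file logs. A sketch of what that looks like (this only bounds the container’s stdout/stderr; the nginx and Rails logs written inside the container still need logrotate and cron, which is part of what the all-in-one image handles):

```yaml
services:
  web:
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate once the current log file reaches 10 MB
        max-file: "5"     # keep at most five rotated files per container
```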

Nothing is stopping you running Redis, Postgres, and web using our base images in a way that can be orchestrated; just bootstrap an image and push it to your Docker repo.

Nothing is stopping you bootstrapping your images, then running all the security patches, and then pushing to your repo.

I am completely unclear on what you are saying here. We accept configuration that tells us where redis is via an ENV var. You can do what you will.
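For example, pointing at an external Redis and Postgres (an Elasticache or RDS endpoint, say) is just a matter of the env section in your app.yml. The hostnames and credentials below are placeholders:

```yaml
env:
  DISCOURSE_REDIS_HOST: my-cache.example.com   # e.g. an Elasticache endpoint
  DISCOURSE_DB_HOST: my-db.example.com         # e.g. an RDS endpoint
  DISCOURSE_DB_USERNAME: discourse
  DISCOURSE_DB_PASSWORD: "change-me"
```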

Nothing leaks through. To run launcher, all you need is bash; once stuff is bootstrapped, you have images that you can use however you like, anywhere.


How exactly is try_files going to work? Now we would need a data volume for our precompiled assets that somehow gets shared between two images.

And a much more practical question, why does somebody who got a $10 install on digital ocean care about any of this?

All they want is a simple mechanism for installing Discourse that is secure, robust, will not chew up all their disk space with logs, and offers 1-click upgrade. This hangup on “thou shalt only use tool XYZ” is not really helping them.


Might be more practical to focus on these, I know #1 is coming because Meta is running on PG 9.5 right now, as well as at least one other new customer…

I think refinements to the current process are more interesting and useful than a “start everything over from scratch” mentality.


Or, since this is entirely open source software we’re dealing with here, someone can demonstrate how wrong we are, by repackaging Discourse in a more minimalist container. I’d love to see different ways of packaging complex webapps like Discourse, but so far, despite several people talking about it, I haven’t noticed a single alternate approach get a lot of traction (the fact that multiple people have published separate docker-compose recipes seems to suggest that there isn’t a single “best practice”, universally-applicable approach to using that tool).

As Sam suggested, then, it would be nice if you could provide a summary. I did read your post, and as far as I could tell, it was asking for exactly that sort of a thing, with HA Redis, PostgreSQL, and all the rest “baked in”:

Also, one thing I forgot to mention previously, in specific response to:

Nothing needs to be fixed in the bootstrap process, because that is exactly what ./launcher bootstrap will do for you; in fact, it’s exactly the command we use to build the images that run our internal hosting environments.


Hi! Thanks for your great work!
As I see it now, the problem with the current infrastructure is that it breaks Docker rules, like immutable containers and one process per container.

  1. Immutable containers. As I read in the topic, it’s almost impossible to make immutable containers (for the app servers), because it would break a lot of things, so let’s skip it; we can’t do anything about it right now.

  2. One process per container. I think it’s very easy to remove the templates from launcher and add a single template where each part of Discourse runs in a separate container (nginx, pg, redis, unicorn, sidekiq, and so on), with only one process per container. As I see it, there are no problems with separating things, and you get isolated services to manage. In every respect that’s better than having an all-in-one container (or almost-all-in-one). Also, if things are separated, you have the potential to scale: run multiple unicorn containers easily. With the latest version of Docker it’s possible to run a multi-server Docker swarm and scale well. After this step, simply add service discovery (consul, registrator, and consul-template) and you have a nice system. What do you think?


Can you define that term precisely? The containers built by launcher are certainly “immutable” in the sense that the container image itself doesn’t change during execution (all stateful information is held in external volumes) and more instances of a container can be easily spawned on separate machines. Other than that, I don’t know what you could be referring to.

I think someone should definitely build that. I don’t think anyone at CDCK will be taking the lead on it, though, because the current arrangement seems to work well for the target audience of the all-in-one container: small, standalone sites without any existing infrastructure to integrate with. There are higher priorities for the dev team than the work required to support the much smaller percentage of Discourse users that:

  1. Need a larger-scale setup;
  2. Have a common, well-defined, existing infrastructure to plug into; and
  3. Don’t have the skills and knowledge to do the needful themselves.

As Jeff has already said, community contributions in this area would be welcomed, but so far, despite a number of people saying they’ll do something, not a lot of results have been forthcoming which have met with wide acclaim from others.


Pg and Redis run fine now in single containers; we have samples for that.

I am not particularly happy about the prospect of splitting nginx and unicorn, because they are tightly coupled: nginx serves static files directly, so you would need to add one more shared volume between nginx and web, which adds complexity for dubious gain.
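To make the tradeoff concrete, the split under discussion would look roughly like this in Compose terms (a sketch only; the image name is a placeholder, and `/var/www/discourse/public` is assumed here as the assets path, matching the official image layout):

```yaml
services:
  nginx:
    image: nginx
    volumes:
      - public-assets:/var/www/discourse/public:ro   # nginx reads the assets
  web:
    image: example/discourse-unicorn   # placeholder: a unicorn-only image
    volumes:
      - public-assets:/var/www/discourse/public      # web writes them at startup
volumes:
  public-assets:   # changes every build, so old contents need garbage collection
```

The shared volume is exactly the extra moving part being objected to: the assets change on every build, so something has to repopulate and clean up that volume on each upgrade.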

The unicorn master forks off sidekiq (and the web workers); this means that sidekiq gets to share memory with the parent process. If you split the two services, you waste memory.

Memory is the #1 issue we have on our Digital Ocean installs; we cannot afford to regress here.


You are absolutely right. Files in containers don’t change, except for shared volumes. The best way is to ship ready-made containers with precompiled assets and so on. But that’s not possible; please read the full phrase:
As I read in the topic, it’s almost impossible to make immutable containers (for the app servers), because it would break a lot of things, so let’s skip it; we can’t do anything about it right now.

I think people don’t want to do anything after very aggressive discussions. It’s very easy to break the motivation of open source contributors. Maybe that’s why there are only a few pull requests coming from outside the dev team.

Nginx already serves files from a shared volume; there would be no problem separating nginx into its own container and adding the same shared volume. Right now some of the services each run in a separate container, and some services are joined. Why not finish the separation process and make one multi-container template?

Wow, nice solution. Is there a situation where you need to scale sidekiq but not the unicorns? In that case, how much memory would you be spending on sidekiq that isn’t needed? A simple (if ugly) solution for this would be an env switch that stops sidekiq running with unicorn. Unfortunately, I think no one will ever need this.

Do you have a strategy to deal with it?


It sure does: it serves the uploads directory, but it does not serve the public directory where all the minified CSS and JS lives. As a rule, we do not keep data that changes every build in volumes, because then we need to worry about garbage collection.


The number of unicorn workers and sidekiq workers can be controlled via env vars.
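For example (values illustrative), in the env section of app.yml, using the standard `UNICORN_WORKERS` and `UNICORN_SIDEKIQS` settings:

```yaml
env:
  UNICORN_WORKERS: 4    # number of unicorn web workers
  UNICORN_SIDEKIQS: 1   # number of sidekiq processes forked by the unicorn master
```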


So as I see it, all the easy separation steps are already done? Further steps would need a lot of work on a new build system.


Thing is, people who are strongly in the single-process-per-container camp tend to see red when you boot init or do anything that strays from a single process per container.

I am strongly in the “whatever works and solves the problem” camp.

I can give users an nginx container with no cron, and they can exhaust disk space, because Docker is pretty lousy at dealing with logs and getting logrotate right is hard.

I can give people a container for sidekiq and a separate one for unicorn web workers, and the memory footprint goes up.

I can let them install whatever pg they want and deal with the nightmare of upgrading pg themselves.

What we have now works for us, and it is quite flexible. We need more work on a better config format, because the YAML nightmare keeps biting us; we need a clean upgrade to 9.5; we need a 100% automated Let’s Encrypt template. Plenty of stuff to do, and we get plenty of help from the community; we accept PRs to improve things, and so on.

The discussion got heated because it is hard not to be defensive when people tell you you are doing it wrong.


The simplest solution I could think of would need this:

  • move everything critical the launcher scripts do to a script inside the container
  • add settings for the most critical stuff (domain, SMTP, admin password) to the admin interface, or to that shared (out-of-container) directory

So we could share pre-bootstrapped Dockerfiles like:

```dockerfile
FROM sam/discourse:x.yy
ADD myconfig.yml /root/myconfig.yml
RUN launcher bootstrap myconfig
CMD something
```


I made a proof of concept that is capable of launching Discourse with a bogus domain and email settings. (Not usable though, because you wouldn’t be able to register.)

I stripped the launcher script of everything that isn’t the bootstrap code, and modified it to call Ruby directly instead of going through Docker, so it can be executed as a build step for the Docker container.