Can Discourse ship frequent Docker images that do not need to be bootstrapped?

I figure I should try to explain my perspective, because this topic is at the root of my serious concerns about deploying Discourse in production and committing sysadmins to keeping it highly available.

I think what worries me most is that there are assumptions, expectations, tooling, best practices and principles that generally apply to how people work with Docker images, and you manage to break almost all of them. The current system is surprising, which is rarely a good thing, and it compounds the problem: I can’t assume that anything you’re doing follows convention.

  • For a highly available system I need to set up Redis, Postgres and the web app in a way where they can be managed, monitored and preferably clustered.
  • For security, I really want to know that the system is security-patched regularly, locked down, and has only the minimum required software installed. I need access to application logs, I need to know what’s inside the image, and I need to be able to subscribe to relevant security advisories.
  • For quality, I want to know that dependencies like Redis which have slightly different behaviour when running multi-node are developed & tested in that configuration.

We have sysadmins, and to some degree your wrapping gets in the way of them doing their jobs effectively, because they have to learn something new that has nothing to do with the systems they already know.

From my perspective, nobody should care if you ship a separate Docker image for each component of your service (that is neither unusual nor surprising), but we should care that you package everything into a single container, because it breaks things and behaves differently from the setup you recommend: the configuration of each service component can affect the others. The fact that you say a multi-container setup is what people should be using in production points to this being how it should be developed too; if you should be using an HA Redis cluster in production, it’s great to be able to use the same shape of setup in development.
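To make that concrete, here’s a minimal sketch of the kind of multi-container layout I mean, in Compose v1 syntax - the image tags, service names and environment variables are placeholders, not Discourse’s actual settings:

```yaml
# Illustrative only - image versions and env var names are assumptions.
web:
  build: .
  ports:
    - "80:80"
  links:
    - postgres
    - redis
  environment:
    - DISCOURSE_DB_HOST=postgres     # hypothetical variable names
    - DISCOURSE_REDIS_HOST=redis
postgres:
  image: postgres:9.4
  volumes:
    - /srv/discourse/postgres:/var/lib/postgresql/data
redis:
  image: redis:2.8
```

Each component can then be swapped for its clustered equivalent (a Redis Sentinel setup, RDS, and so on) without touching the other services.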

Your “philosophy-agnostic” design actually breaks the elements of the Docker philosophy that allow you to cluster applications with Docker Swarm, Tectonic, Kubernetes, Amazon ECS, CoreOS… I just can’t buy the idea that you’re being philosophy-agnostic when you’re breaking compatibility and standards.

I believe that your Ruby dependency leaks through to the host machine via “launch”. On lightweight container OSes this means you can’t use Discourse easily, since CoreOS has no package manager and doesn’t really support installing things on the host.

When you say that launch performs tests, I would hope that you could do that by layering on top of Compose (compose the environment plus a test container), or by providing another image that exists only to run those tests. Launch can morph (in part) into a container that takes template files and produces config files (nginx config, shell script variables, and so on). Acceptance / integration tests fit naturally into their own container, and a container can be as small as 10MB using golang.
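As a sketch of what I mean (the service and directory names are made up), an extra service in the compose file can exist purely to run the tests:

```yaml
# Hypothetical test-runner service: builds a tiny (e.g. golang) image
# whose only job is to hit the running stack and exit non-zero on failure.
tests:
  build: ./acceptance-tests
  links:
    - web
  environment:
    - TARGET_URL=http://web
```

Then `docker-compose up -d` followed by `docker-compose run --rm tests` composes the environment and runs the suite against it.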

Assuming I understood what you pointed out about using a Dockerfile, you noted that it creates a lot of layers. That’s true if you use a lot of RUN lines, but if you bundle the smaller steps into a single shell script and execute that, you only add a single layer. This approach loses the per-line cache hashing that detects when a step has changed, but I don’t think you’re really benefitting from that anyway. You could version the script by adding its content hash to the filename, so each change forces an automatic re-run of the transformation steps. See: http://blog.tutum.co/2014/10/22/how-to-optimize-your-dockerfile/
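Something like this is what I have in mind - a sketch, with hypothetical file names:

```dockerfile
FROM debian:jessie

# One COPY + one RUN = two layers, however many steps the script contains.
# Embedding the script's content hash in its name (build-3f9a1c.sh) busts
# the build cache automatically whenever the script changes.
COPY build-3f9a1c.sh /tmp/build.sh
RUN /tmp/build.sh && rm /tmp/build.sh
```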

As I understand Docker Compose, service startup ordering is based on links and volumes, but you can also ask Compose to start services explicitly in the required order. I believe that what you need there comes down to a small batch script that orders the compose calls.
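For example (a sketch - the service names are placeholders, and you’d probably poll the ports rather than sleep):

```sh
#!/bin/sh
# Start the data stores first, give them a moment, then start the app.
docker-compose up -d postgres redis
sleep 5
docker-compose up -d web
```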

It’s not surprising (to me at least) that the Postgres upgrade process does not run as part of the standard images. For one, I would be amazed if you couldn’t use the standard Docker mechanisms to override the default CMD (Running containers | Docker Docs), and secondly the postgres images provide documentation on how to extend them:

How to extend this image
If you would like to do additional initialization in an image derived from this one, add one or more *.sql or *.sh scripts under /docker-entrypoint-initdb.d (creating the directory if necessary). After the entrypoint calls initdb to create the default postgres user and database, it will run any *.sql files and source any *.sh scripts found in that directory to do further initialization before starting the service.
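So a derived image - sketched here with a hypothetical script name - is a couple of lines; note that these init scripts only run when the data directory is first initialized:

```dockerfile
FROM postgres:9.4

# Runs after initdb creates the default user and database,
# but only on first initialization of the data volume.
COPY init-discourse.sql /docker-entrypoint-initdb.d/
```

Alternatively you can skip the derived image entirely and mount a directory of scripts at /docker-entrypoint-initdb.d with -v.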

In a clustered, highly available setup I would expect a DBA to be managing the database, or, in the case of RDS, for AWS to handle much of this. A lot of work typically goes into zero-downtime database migration and the ability to run multiple application versions side by side and roll back, particularly around schema changes and data migration. To me, baking this process into the bootstrap makes a lot of assumptions about the environment.

By not making a single-process entrypoint the way you run your container, you miss out on containers stopping when the program crashes, and on things like the Docker logging driver. That makes the application difficult to integrate with centralized logging if we want to, and more complex to monitor.
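The fix is just the ordinary pattern - an exec-form CMD so the app server is PID 1 (the command below is an assumption about what the app container would run):

```dockerfile
# Exec form: no shell wrapper swallowing signals; the container exits
# when the process dies, and stdout/stderr flow to the logging driver.
CMD ["bundle", "exec", "unicorn", "-c", "config/unicorn.rb"]
```

With that in place, `docker run --log-driver=syslog …` ships the app’s output to centralized logging, and a crashed process means a stopped container that `--restart=always` or an orchestrator can deal with.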

While this is quite a personal view, I tend to regard any batch script longer than ten or so lines (with logic in it) as a sign of something bad - at that point you should use a scripting language. A 650-line bash script is practically a harbinger of the apocalypse.

For configuration management, it feels entirely wrong to be discussing Chef and Puppet (that’s just in this thread, though), since these are, generally speaking, tools for reaching into and changing VMs. There is a reason that CoreOS and others use etcd / consul: in high-availability clusters, services may have to subscribe to configuration and change it on the fly. Both also support distributed locks.
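To illustrate with etcd’s v2 CLI (the key names are invented):

```sh
# A deploy tool publishes a config value; running services watch the key
# and react on the fly, instead of being re-bootstrapped.
etcdctl set /discourse/config/smtp_host smtp.example.com
etcdctl watch /discourse/config/smtp_host   # blocks until the value changes
```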

A number of your templates are actually interactions with the nginx configuration. The normal way of running Ruby in a container (iirc) is not to use nginx (which you would on a VM to get a decent level of parallelism) but simply to run more containers. Pulling nginx out seems like a good idea, especially since there’s a template for SSL configuration in there: if nginx is your front-facing layer, it’s even more important for sysadmins to be able to configure it, ensure it’s patched separately, and preferably network-segment it away from direct DB access. We may be load balancing anyway using ELBs, HAProxy or something else, and while it’s nice to have a starter nginx config generated, I want my sysadmins to be able to manage it without learning your templating system or re-bootstrapping the application.
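The sort of thing a sysadmin would rather own directly looks like this - an ordinary nginx front-end config, with placeholder upstream names and certificate paths, rather than a generated template:

```nginx
upstream discourse {
  server web1:3000;   # one entry per app container
  server web2:3000;
}

server {
  listen 443 ssl;
  ssl_certificate     /etc/nginx/ssl/site.crt;
  ssl_certificate_key /etc/nginx/ssl/site.key;

  location / {
    proxy_pass http://discourse;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}
```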

Decomposing the service components should cause a lot of the templating to shrink, because you no longer need to juggle a significant amount of configuration on a single machine, and much of it can be handled by small modifications and extensions of standard images (removing a lot of the installation steps, and making it clear which requirements belong to which component). The overview of your application becomes much simpler as the service architecture becomes clearer.

Note that none of this should preclude providing a simple “out of the box” configuration for developers, small installations and trials, while becoming easier for sysadmins to manage as needed. (See: How to use Docker Compose to run complex multi container apps on your Raspberry Pi · Docker Pirates ARMed with explosive stuff.) Fixing your bootstrap to just produce an image completely answers “how do I make sure the same thing is running on multiple machines?”, because that’s exactly what a Docker image is supposed to give you, via version tags and a private registry.
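That workflow is just the standard one (the registry host and tag below are placeholders):

```sh
# Bootstrap/build once, push the artifact, and every machine pulls
# the identical image by tag.
docker build -t registry.example.com/discourse:v1.2.3 .
docker push registry.example.com/discourse:v1.2.3
docker pull registry.example.com/discourse:v1.2.3   # on each host
```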

I hope I’ve done a semi-reasonable job of explaining why I find some of these choices concerning, and why they potentially make it harder than it needs to be for our teams to take responsibility for and own a deployment of Discourse. I really want to like it, because it means we don’t have to build or fork a forum solution to make it fit, but I also have to consider the cost of operation and the SLAs we can offer.
