Can Discourse ship frequent Docker images that do not need to be bootstrapped?

How does any of this address:

I would add to this:

  1. cd /var/discourse && ./launcher rebuild app is a scourge that our free installers hate. We want a web UI for that.

How does any of the multi FROM image build fanciness move the needle for 2 or 3 or 4?

It simply does not.

It allows us to cut out a few shell scripts, possibly.

If anyone really wants to help here, there are specifics that can be worked on.

A no-brainer that keeps popping up OVER AND OVER AND OVER again is :rage: YAML, the worst config format in the world, hated by humans ever since it was invented. Dear docker compose fans: docker compose is YAML.

Want to make a difference that will help lots of people? Add optional TOML support to launcher/pups. That would be huge. ENORMOUS. Imagine if all the effort that went into this topic had gone into building a TOML patch. We would have TOML today.
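To make the ask concrete, here is a sketch of what a hypothetical app.toml equivalent of a launcher app.yml fragment might look like. The key names mirror the familiar YAML container definition, but the exact schema is an assumption, since no TOML support exists in launcher/pups today:

```toml
# Hypothetical app.toml; keys mirror the app.yml container definition layout.
templates = [
  "templates/postgres.template.yml",
  "templates/redis.template.yml",
  "templates/web.template.yml",
]
expose = ["80:80", "443:443"]

[env]
DISCOURSE_HOSTNAME = "forum.example.com"
DISCOURSE_DEVELOPER_EMAILS = "admin@example.com"
UNICORN_WORKERS = 4
```

Unlike YAML, indentation carries no meaning and string quoting is unambiguous, which is the whole appeal for hand-edited config.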

A second area that would be interesting is experimentation with an "admin/upgrade web container" that is separate from Discourse. That way we could throw away the existing web updater and replace it with something that talks directly to docker. There are lots and lots of details here, but it is 100% clear that it is better to have one way to upgrade Discourse.

5 Likes

If you don't already have something in mind: I've seen a few solutions built around an admin docker container that mounts the docker socket to interact with the daemon. Specifically, it was for service discovery registration through docker, but I don't see why it wouldn't work here either.
https://gliderlabs.com/registrator/latest/
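For the record, the socket-mounting pattern itself is tiny. A minimal sketch of such an admin service in compose form (the image name is hypothetical) would be:

```yaml
# Hypothetical admin/management sidecar. Mounting the docker socket gives it
# full control of the host's docker daemon, so treat it as root-equivalent.
version: "3"
services:
  admin:
    image: example/discourse-admin   # hypothetical image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

Anything in the `admin` container can then drive rebuilds, restarts, and upgrades by talking to the daemon through that socket.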

I proposed a solution over here. (Just make discourse-setup or discourse-multi-setup do that.)

It's not what you're talking about, but I'm getting close to having a Discourse installer web app that, given Digital Ocean and Mailgun API keys, configures everything, installs Discourse, and tells you what DNS records to set. Once it's deployed, I'll start adding features like rebuilds, plugin management, and so on.

Yeah, we've got a bunch of stuff in our infrastructure that talks directly to docker. We even ran registrator for a while, until consul turned out to be a dog's breakfast.

It's a big pile of work, though, to build something that operates as a one-stop frontend proxy to everything Discourse-related (because in order to have a web-based control panel that you can access via the same address, you need to have one place listening which then proxies elsewhere) and then passes requests to the "manager" container, Discourse itself, etc etc etc as required. Big job. Yuuuuuge.

4 Likes

I figured you guys had something planned – I'm definitely looking forward to seeing what you guys come up with. Obviously balancing security + accessibility on things like this will be an immense undertaking.

Disheartened to hear that consul+registrator didn't work out for you, though; from the outset, it looked like a fairly elegant docker solution. I'm fond of consul, but I'm also at a place with fairly static servers that's less docker-centric, so I'm not actually fighting with it all the time.

(Possibly veering a little off the topic here…)

Yeah, I was heartbroken when consul turned out to be a network pig at scale; it seemed like such a good solution. Registrator was always a bit of a love-hate thing; it really did not want to support IPv6, and occasionally decided it would duplicate registrations under different names and then not clean up after itself. Nothing significant, but enough to make me grumble.

5 Likes

In case this has been missed: the new docker version includes a multi-stage build feature, specifically built to handle build steps in a better way than before. I'll (have to) leave it to the experts to assess whether this helps solve (at least part of) the issue at hand. See the release announcement blog post linked below:
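For anyone unfamiliar, a multi-stage build lets one Dockerfile produce a slim runtime image from a fat build stage. A minimal sketch of the pattern (the stage contents are illustrative, not Discourse's actual build):

```dockerfile
# Stage 1: heavy build environment; installs gems and compiles assets.
FROM ruby:2.4 AS builder
WORKDIR /app
COPY . .
RUN bundle install && bundle exec rake assets:precompile

# Stage 2: slim runtime image; copies only the build output from stage 1.
FROM ruby:2.4-slim
WORKDIR /app
COPY --from=builder /app /app
CMD ["bundle", "exec", "unicorn"]
```

The build tooling never ships in the final image, which is the part that was awkward to achieve with plain Dockerfiles before.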

5 Likes

This has been the #1 reason that has kept me away from Discourse for several years :frowning:

It's like buying a phone with no OS, but bundled with a firmware flashing kit, because the manufacturer argues that you need it.

I for one strongly think that this is an adoption barrier, as silly as it may sound.

I hope this changes.

Would you like to help us improve it by showing us how it could be done better? We'd happily do things in a more idiomatic way if it would address all our concerns.

5 Likes

Seems that this guy had the right idea: https://meta.discourse.org/t/a-new-project-aimed-playing-discourse-with-docker-compose-or-rancher-compose-and-less-rebuilds/56780/6

And yet… no mention of where one can download a demonstration of this superlative alternative.

5 Likes

In practice, bootstrapping really isn't a showstopper. Neither in terms of downtime, nor technical skills required.

Even with a relatively large forum and many plugins, a rebuild only takes a fraction of the time that a backup has the forum in read-only mode. Our site spans the globe, and so far we've had no complaints about the forum being offline during a rebuild, or read-only during a backup. In fact, only a few users have even noticed rebuilds and backups.

Itā€™s an online discussion platform, not the GPS satellite network.

7 Likes

Found this, works like a charm: https://github.com/bitnami/bitnami-docker-discourse

docker-compose up -d and I have a local dev installation in my Windows environment.

It does seem to rely on Bitnami's package manager and distribution, and on external dependencies, but I guess I will take it.

3 Likes

This is why I don't understand the controversy over decomposing the docker containers for Discourse. The best practices are out there and available. Docker itself (not docker-compose) natively supports even single-node swarm mode as one of its primary supported ways of running (using a version of a docker-compose file). While the devs continue (fairly) to suggest that someone who wants the feature submit a PR to implement such a change, the fact is that third parties have nowhere near the application and code knowledge required to do so in an architecturally elegant, sustainable way. There is no open-source forum software with the same feature set and quality as Discourse. But the fact remains that the project is accruing technical debt on the deployment side as it fails to follow best practices and utilize the native features of the build system it uses.

As in commercial software, so too in open-source projects and products: the market will decide. The desire to follow standards and best practices has already resulted in multiple codebase forks. Eventually, if this desire isn't met, another project that does follow them will take its place, or a fork of this project will succeed while this one is abandoned. Of course, this won't take weeks or months, but years. The Discourse team are great developers with a tremendous product, and I have no doubt they know and love their product more than anyone else. But rather than having folks who know docker attempt to modify this powerful and complex app to adapt it to best practices, standards, and native docker features, why not undertake to learn them fully (since you are already using docker) and leverage the power of compose-based (even multi-stage) builds to deploy your app?

6 Likes

The thing is, you are making an assumption here that our current design is somehow limiting from a deployment perspective. This very site auto-scaled yesterday from 3 to 10 nodes because CPU was high for … reasons. This happened automatically: the web image in the AWS container registry got deployed on new EC2 instances, magically, with no human intervention.

You are trying to convince me here that we need to bite off a significant piece of work that will certainly make debugging docker in the wild harder for hobbyists, for the greater good of following best practices.

Today, when a hobbyist has a problem I tell them:

cd /var/discourse
./launcher rebuild app

It resolves most issues. The script does not need to reason about a pod and 4 inter-related services. We don't have to worry about end users configuring logging and log rotation and a bunch of other complex stuff.

We do not host a monolithic container with db+redis+app; we host web pods / db pods / redis pods and so on, which have IPv6 addresses, and using service discovery from container labels we glue stuff together magically in our environment. Hobbyists do not need any of this.

10 Likes

I made no such assumption. Note that technical debt has nothing to do with ease of deployment or even availability. It's an issue of code support and best practices in Docker as it continues to be developed.

Not exactly. I'm trying to convince you that you need to bite off the significant piece of work now to avoid accruing technical debt you'll inevitably be working to pay down anyway, if you plan to stick with docker in your deployment methodology. The claim that it will make debugging harder is an assumption you are making. You can easily provide a turnkey solution on top of the composed services, as well as ship a set of binaries that do the work on behalf of the user if desired. All of this can be done while cleaning up, and likely even simplifying, the deployment.

1 Like

One more point here. You can easily implement a log-collection container and/or do logrotate within docker containers; just build it into the image. Further, it depends on your point of view: you may not be familiar with docker swarms, but they are very resilient at recovering from many issues. You could also provide a backup image whose only purpose is to dump database exports to a bind mount location on the host.
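As a sketch of that last idea, a backup sidecar in compose form could look like this (the image, credentials, schedule, and paths are all assumptions, not anything Discourse ships):

```yaml
# Illustrative backup container: periodically dumps the database to a bind
# mount on the host. Credentials and the "db" hostname are placeholders.
version: "3"
services:
  backup:
    image: postgres:9.6
    environment:
      PGPASSWORD: example-password   # placeholder
    volumes:
      - ./backups:/backups
    entrypoint:
      - sh
      - -c
      - while true; do pg_dump -h db -U discourse discourse > /backups/discourse-$$(date +%F).sql; sleep 86400; done
```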

Can you explain to me how any of this is going to improve hobbyist adoption of discourse or make it easier for us to support hobbyists installing discourse on 10 dollar digital ocean droplets?

Btw we already logrotate within our container

2 Likes

I do wonder about the impact of downtime due to rebuilds. The thing that concerns me is Google's attitude to it:

1 Like

What kind of technical debt do you think we're building up here? Also, the argument that we're not following "best practices" in Docker is really flawed here. Best practices are meant to act as guidelines and not the absolute truth. For hobbyist installs, we think shipping a single container that contains all the services is the best practice that we can support in a sane way.

Just putting this out there :wink:

10 Likes