cd /var/discourse && ./launcher rebuild app is a scourge that our free installers hate. We want a web UI for that.
How does any of the multi-FROM image build fanciness move the needle for 2, 3, or 4?
It simply does not.
It allows us to cut out a few shell scripts, possibly.
If anyone really wants to help here, there are specifics that can be worked on.
A no-brainer that keeps popping up OVER AND OVER AND OVER again: yaml is the worst config format in the world, hated by humans ever since it was invented. Dear docker compose fans, docker compose is yaml.
Want to make a difference that will help lots of people? Add optional toml support to launcher/pups. That would be huge. ENORMOUS. Imagine if all the effort that went into this topic had gone into building a toml patch; we would have toml today.
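To make the idea concrete, here is a rough sketch of what a toml container definition could look like, assuming launcher/pups grew a toml loader. The keys mirror a minimal standalone app.yml; treat it as illustrative, not a spec:

# hypothetical app.toml, mirroring a minimal app.yml
templates = [
  "templates/postgres.template.yml",
  "templates/redis.template.yml",
  "templates/web.template.yml",
]
expose = ["80:80", "443:443"]

[env]
LANG = "en_US.UTF-8"
DISCOURSE_HOSTNAME = "discourse.example.com"
DISCOURSE_DEVELOPER_EMAILS = "admin@example.com"

[[volumes]]
[volumes.volume]
host = "/var/discourse/shared/standalone"
guest = "/shared"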
A second area that would be interesting is experimentation with an "admin/upgrade web container" that is separate from Discourse. That way we throw away the existing web updater and replace it with something that talks directly to docker. Lots and lots of details here, but it is 100% clear that it is better to have 1 way to upgrade Discourse.
If you don't already have something in mind: I've seen a few solutions to having an admin docker container that mounts the docker socket to interact with the daemon. Specifically it was for service discovery registration through docker, but I don't see why it wouldn't work here either. https://gliderlabs.com/registrator/latest/
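For what it's worth, the socket-mounting part is a one-liner. A sketch, assuming a hypothetical discourse-manager image (not a real published one):

# mount the host docker socket so the admin container can drive the daemon
docker run -d \
  --name discourse-manager \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  discourse-manager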
I proposed a solution over here. (Just make discourse-setup or discourse-multi-setup do that.)
It's not what you're talking about, but I'm getting close to having a Discourse installer web app that, given Digital Ocean and Mailgun API keys, configures everything, installs Discourse, and tells you what DNS records to set. Once it's deployed, I'll start adding features like rebuilds, plugin management, and so on.
Yeah, we've got a bunch of stuff in our infrastructure that talks directly to docker. We even ran registrator for a while, until consul turned out to be a dog's breakfast.
It's a big pile of work, though, to build something that operates as a one-stop frontend proxy to everything Discourse-related (because in order to have a web-based control panel that you can access via the same address, you need to have one place listening which then proxies elsewhere) and then passes requests to the "manager" container, Discourse itself, etc etc etc as required. Big job. Yuuuuuge.
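A very rough compose-style sketch of that layout, with every name being illustrative rather than anything that exists today:

# one front door: the proxy receives everything and forwards to the
# manager container or to Discourse itself
version: "3"
services:
  proxy:
    image: nginx
    ports: ["80:80"]
    volumes:
      - ./proxy.conf:/etc/nginx/conf.d/default.conf   # routing rules live here
  manager:
    image: discourse-manager       # hypothetical admin/upgrade container
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  app:
    image: discourse-app           # the Discourse web container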
I figured you guys had something planned; I'm definitely looking forward to seeing what you guys come up with. Obviously balancing security + accessibility on things like this will be an immense undertaking.
Disheartened to hear that consul+registrator didn't work out for you guys, though; from the outset, it looked fairly elegant as a docker solution. I'm fond of consul, but I'm also at a place with fairly static servers that's less docker-centric, so I'm not actually fighting with it all the time.
Yeah, I was heartbroken when consul turned out to be a network pig at scale; it seemed like such a good solution. Registrator was always a bit of a love-hate thing; it really did not want to support IPv6, and occasionally decided it would duplicate registrations under different names and then not clean up after itself. Nothing significant, but enough to make me grumble.
In case this has been missed: the new docker version includes a multi-stage build feature, specifically built to handle build steps in a better way than before. I'll (have to) leave it to the experts to assess whether this helps in any way to solve (at least part of) the issue at hand. See the release announcement blog post linked below:
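For anyone who hasn't seen the feature yet, a minimal illustration of a multi-stage Dockerfile: build in one stage, then copy only the artifacts into a slim final image. The image names and commands here are generic, not Discourse's actual build:

# stage 1: build gems and assets
FROM ruby:2.4 AS builder
WORKDIR /src
COPY . .
RUN bundle install --deployment && bundle exec rake assets:precompile

# stage 2: slim runtime image that only carries the build output
FROM ruby:2.4-slim
WORKDIR /app
COPY --from=builder /src /app
CMD ["bundle", "exec", "puma"]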
Would you like to help us improve it by showing us how it could be done better? We'd happily do things in a more idiomatic way if it would address all our concerns.
In practice, bootstrapping really isn't a showstopper. Neither in terms of downtime, nor technical skills required.
Even with a relatively large forum and many plugins, a rebuild only takes a fraction of the time that a backup keeps the forum in read-only mode. Our site spans the globe, and so far we've had no complaints about the forum being offline during a rebuild, or read-only during a backup. In fact, only a few users have even noticed rebuilds and backups.
Itās an online discussion platform, not the GPS satellite network.
This is why I don't understand the controversy over decomposing docker containers for Discourse. The best practices are out there and available. Docker itself (not docker-compose) natively supports even single-node swarm mode as one of the primary supported ways of running (driven by a version of the docker-compose format). While the devs continue (fairly) to suggest that someone who wants the feature submit a PR to implement such a change, the fact is that third parties have nowhere near the application and code knowledge required to do so in an architecturally elegant / sustainable way. There is no open-source forum software with the same featureset and quality as Discourse. But the fact remains that this is accruing technical debt on the deployment side, as it fails to follow best practices and use the native features of the build system it relies on.
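For reference, single-node swarm mode really is just a couple of commands; only the compose file name below is an assumption:

docker swarm init
docker stack deploy -c docker-compose.yml discourse
docker service ls    # lists whatever services the compose file defines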
As in commercial software, so too in open-source projects and products: the market will decide. The desire to follow standards and best practices has already resulted in multiple codebase forks. Eventually, if this desire isn't met, another project that does will take its place, or a fork of this project will succeed while this one is abandoned. Of course, this won't take weeks or months but years. The Discourse team are great developers with a tremendous product. I have no doubt they know and love their product more than anyone else. But rather than having folks who know docker attempt to modify this powerful and complex app to adapt it to best practices / standards / native docker features, why not undertake to learn them fully (since you are already using them) and leverage the power of compose-based (even multi-stage) builds to deploy your app?
The thing is, you are making an assumption here: that our current design is somehow limiting from a deployment perspective. This very site auto-scaled yesterday from 3 to 10 nodes because CPU was high for … reasons. This happened automatically: the web image in the AWS container store got deployed on new EC2 instances, magically, with no human intervention.
You are trying to convince me here that we need to bite off a significant piece of work, one that will certainly make debugging docker in the wild harder for hobbyists, for the greater good of following best practices.
Today, when a hobbyist has a problem, I tell them:
cd /var/discourse
./launcher rebuild app
It resolves most issues. The script does not need to reason about a pod and 4 inter-related services. We don't have to worry about end users configuring logging, log rotation, and a bunch of other complex stuff.
We do not host a monolith container with db+redis+app; we host web pods / db pods / redis pods and so on, which have ipv6 addresses, and using service discovery from container labels we glue stuff together magically in our environment. Hobbyists do not need any of this.
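Roughly, label-based discovery looks like this (the label key and image name are made up for illustration, not our actual setup):

# tag a container with a service label at run time
docker run -d --label com.example.service=discourse-web my-web-image
# read the label back from the daemon on the discovery side
docker inspect --format '{{ index .Config.Labels "com.example.service" }}' <container-id>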
I made no such assumption. Note that technical debt has nothing to do with ease of deployment or even availability. It's an issue of code support and best practices in Docker as it continues to be developed.
Not exactly. I'm trying to convince you that you need to bite off the significant piece of work to avoid accruing technical debt you'll inevitably be working to pay down anyway if you plan to stick with docker in your deployment methodology. That it will make debugging harder is an assumption you are making. You can easily provide a turnkey solution on top of the composed services, as well as ship a set of binaries that do the work on behalf of the user if desired. All of this can be done while cleaning up, and likely even simplifying, the deployment.
One more point here. You can easily implement a log-collection container and/or do logrotate within docker containers; just build it into the image. Further, it depends on your point of view: you may not be familiar with docker swarms, but they are very resilient at recovering from many issues. Finally, you could provide a backup image whose only purpose is to dump database exports to a bind-mount location on the host.
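A sketch of that backup image idea, with the network name, db host, user, and schedule all assumed (real credential handling is omitted here):

# dump the database to a host bind mount; every name is illustrative
docker run --rm \
  --network discourse \
  -v /var/discourse/backups:/backups \
  postgres:9.6 \
  sh -c 'pg_dump -h db -U discourse discourse > /backups/discourse-$(date +%F).sql'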
Can you explain to me how any of this is going to improve hobbyist adoption of discourse, or make it easier for us to support hobbyists installing discourse on $10 Digital Ocean droplets?
What kind of technical debt do you think we're building up here? Also, the argument that we're not following "best practices" in Docker is really flawed. Best practices are meant to act as guidelines, not absolute truth. For hobbyist installs, we think shipping a single container that contains all the services is the best practice that we can support in a sane way.