This would be an official Docker image, where "official" describes Docker, not an endorsement from Discourse. Think of it like a Debian package.
In Debian, I wouldn't need your agreement to make a Debian package (unless you specifically objected).
With Docker, it is Docker's policy that I need your agreement, which is better.
This still doesn't mean you have to support it. We could still state that the official way of installing Discourse is with the launcher, and that this image is not an official Discourse thing, just a community effort.
Ok, I just realized the original issue had the wrong link; I meant to point to this issue, which has a bit more background. (I'll update the post.)
For my day job, I work as a consultant helping companies migrate to Kubernetes.
During my Docker trainings, I actually have people dockerize Discourse; I find it nice, as it is a real-world example!
I developed a docker-compose PaaS called libre.sh, and we are rebooting the project with Kubernetes. I plan to migrate the Discourse instances first, which is why I'm here again to discuss this topic.
What makes me want this official image so badly is the talk for the FreedomBox. I want to push my changes upstream so we can collaborate. Of course, I can keep maintaining my little Docker image on my side, but many people are doing the same, and so much valuable engineering time is lost… it makes me sad.
I hope I managed to convince you; I still have some arguments in my pocket!
I never did that (yet; I never had to), but I'll try in the coming weeks. Now I'm curious.
(And, well, this question is obviously not about the Discourse Docker image itself.)
I really do think you did a great job with this launcher, but we are sysadmins, and I think we all love our standardization; the launcher is not standard. It is really perfect for someone who wants to get started quickly on a droplet. But yeah, with a plain Docker image, people will have to deal with all of that themselves.
And we'll need to put a big warning about that specific point.
I see that you run Sidekiq and Unicorn in separate containers, so that means it will use at least double the resources with this setup. No problem, since this will only be for very advanced setups.
Also, asset compilation and migrations run on every boot.
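To make the layout concrete, the split I'm describing looks roughly like this in compose terms (service names, image tag, and commands are my paraphrase, not the actual file):

```yaml
version: "3"
services:
  web:
    image: discourse/discourse:latest    # placeholder image name/tag
    command: bundle exec unicorn -c config/unicorn.conf.rb
    depends_on: [postgres, redis]
  sidekiq:
    image: discourse/discourse:latest    # same image, second container
    command: bundle exec sidekiq -e production
    depends_on: [postgres, redis]
```

Both containers share the same image layers on disk; only the processes (and their memory) are duplicated.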
And what do you think about plugin support? Today, we clone those before the bundle install using a launcher hook.
It absolutely is - getting that right is pretty much the most complex part of the launcher & build process. (This is partly because it doesn’t run very often, so there’s not very many chances to improve it.)
Last time it happened, if I remember correctly, people had problems for weeks (as the updates filtered in) due to a very long list of reasons the update could fail.
As your docker-compose stands now, you're relying on either (1) the official postgres image recognizing and performing an upgrade or (2) doing it manually.
(1) would be nice: even in the case of a skipped upgrade, with an upgrader that can't jump that far, you could step through the intermediate images. It would be great!
However, I don’t think the official postgres images will actually do that for you.
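For the record, the manual path (2) is essentially a dump-and-restore dance, something like the sketch below; service and volume names are invented, and none of this is a supported procedure:

```shell
# Stop the app so no writes happen during the dump
docker-compose stop web sidekiq
# Dump everything from the old cluster
docker-compose exec -T postgres pg_dumpall -U discourse > backup.sql
# Throw away the old container and its data volume (irreversible!)
docker-compose rm -sf postgres
docker volume rm myapp_postgres-data
# Edit docker-compose.yml to bump the tag (e.g. postgres:10 -> postgres:12),
# then bring up the new cluster and restore
docker-compose up -d postgres
docker-compose exec -T postgres psql -U discourse -d postgres < backup.sql
docker-compose up -d web sidekiq
```

Nobody should run that without a tested backup, which is exactly the kind of footgun the launcher hides from people.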
Which resources? A Docker container is really just a process, so as it stands, it will use the memory of two processes. They will share the Docker image, each with a writable layer on top, so the added resource is really just that new layer, and it is really small.
Yes, for that, at first at least, it will be manual.
I really like the documentation and approach from mastodon, and will follow that.
Then, on kubernetes, we can get smarter.
Currently, I clone them on the host, and it works okay-ish. I still have to figure out some little quirks.
I’ll try to keep it simple for the docker-compose version, but on kubernetes, again, I think we can be smarter.
But this is outside the scope of the docker image in itself.
Actually, somebody once added that logic to the Docker image, but I'm not sure I like it.
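For reference, that baked-in variant looked something like this hypothetical Dockerfile fragment (the path, build arg, and plugin list are illustrative, not an agreed design):

```dockerfile
# Clone plugins at build time, before installing gems
ARG DISCOURSE_PLUGINS="https://github.com/discourse/docker_manager.git"
WORKDIR /var/www/discourse
RUN for repo in $DISCOURSE_PLUGINS; do \
      git clone --depth 1 "$repo" "plugins/$(basename "$repo" .git)"; \
    done \
 && bundle install --deployment
```

My reservation is exactly this: every plugin change forces a full image rebuild.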
I’m open to propositions of course.
This image will contain just Rails. Nginx will run in a separate container.
About the logs: it is not Rails's responsibility to handle them. Rails should log to stdout, and Docker (or whatever container runtime) will take care of those logs.
I just opened a PR to add the possibility for Discourse to log to stdout.
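Assuming the PR lands as a switch like the one stock Rails 5+ generates (the default production.rb honors RAILS_LOG_TO_STDOUT), the compose side could stay this small; the service name is a placeholder:

```yaml
services:
  web:
    environment:
      RAILS_LOG_TO_STDOUT: "1"   # assumed switch, as in stock Rails 5+
    logging:
      driver: json-file          # Docker's default; `docker logs` then just works
```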
@riking again, the best I can do is to warn the user something like:
/!\ This is not officially supported by Discourse. If you don't know what you are doing, use the launcher instead.
For instance, there is no easy path to upgrade the underlying postgres, whereas this is handled gracefully by the launcher.
Use this image at your own risk.
What are the next steps? Can you create the repo that can receive the Dockerfile?
Or what is missing from my side?
The concern is that we have a way that works. Introducing another way has the potential to cause a bunch of people trouble, and then they'll go away with a bad taste for Discourse because it didn't work.
I make part of my living doing installs (mostly from imports).
I’ll be willing to do some testing and see how your setup works. Can I test it out?
Yes, actually, I do want to discuss every little detail, because the details matter. We care very much about the experience of people who decide to self-host Discourse, and invest a lot of time and effort in making that as efficient and straightforward as we possibly can. A lot of hard work has gone into making the current setup work well, and throwing all that out and going back several years just because it doesn’t use this month’s “container orchestration” hawtness does not seem like a good tradeoff. Sure as heck we’re not going to put any sort of “official” stamp, or even tacit endorsement, on a deployment method which is worse in any way than what we have now.