If you do the updates from the UX, eventually you’ll get a message that says that you have to do a command-line update. It depends not on Debian, but on the base Discourse image.
And with the 2-container method there would be no GUI update button at all, correct?
The GUI update comes from the docker_manager plugin. If you have that plugin, you have the GUI update.
When vulnerabilities are discovered in image manipulation tools (and remote code execution has definitely happened in the past), you are one image upload away from a compromised system.
Clear Linux set the standard for how fast you can boot on Linux. It’s awesome work; I wholeheartedly endorse it.
Oh, that changes things a bit then. For some reason I was thinking that the GUI updater wouldn’t work with a non-standard 2-container installation. In that case, as long as the admin is technically competent, it seems like there aren’t a lot of downsides to a 2-container installation. I definitely want to have GUI updates: for example, if I’m traveling with just my phone and a major Discourse security update comes out, I can at least apply that without SSH access.
That’s my belief. You basically need to pay enough attention to know when there is a Postgres or Redis upgrade that requires rebuilding the data container. You also need to know to run
./launcher bootstrap web_only && ./launcher destroy web_only; ./launcher start web_only
but that isn’t so hard. Alternatively, you can just do a
./launcher rebuild web_only
but that takes down the site while it’s rebuilding.
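Put together, a low-downtime two-container update looks roughly like this. This is a sketch assuming the standard discourse_docker layout in /var/discourse with containers named web_only and data; adjust the paths and container names to your own setup.

```shell
cd /var/discourse
git pull                        # update discourse_docker itself first

# Build the new web image while the old container keeps serving traffic.
./launcher bootstrap web_only

# Short downtime window: swap the old container for the new one.
# Do these back to back; don't let the bootstrapped image sit unused.
./launcher destroy web_only
./launcher start web_only

# Only needed when a Postgres or Redis upgrade requires it:
# ./launcher rebuild data
```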
Just to be complete: the web UI rebuild normally has zero downtime, while the bootstrap/destroy/start sequence does have minimal downtime; I would only do the latter routinely with a maintenance page provided externally, e.g. by an external nginx as documented here. But that is good practice anyway, if only for getting IPv6 addresses into the container.
Very good, thanks. And with a 2-container installation do you still get Discourse dashboard notifications when the container needs to be rebuilt? And then in that case I could determine whether to rebuild just the app or also the data container?
Yes. I see it right now because I haven’t applied the “only the version has changed” 3.1.0.beta1 update.
This is a case of “it’s fine until it’s not”: people panic when the update fails in the UI and they don’t know to run
git pull; ./launcher rebuild app
to work around the problem. This happens every time there is a change that invalidates the GUI update, I think. It happened again:
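The fallback itself is short; a sketch assuming the default single-container install in /var/discourse with the container named app:

```shell
cd /var/discourse
git pull                 # refresh the discourse_docker scripts and templates
./launcher rebuild app   # full rebuild; the site is down while this runs
```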
I feel like this panic highlights the value of having a consistent, normal update mechanism that avoids this experience.
At the same time, I have encountered the (also infrequent) case of the bootstrap breaking the running system: zero-ish-downtime updates do occasionally break like this, maybe once or twice a year on average? So don’t delay between the bootstrap and the destroy/start.
I should update the text to make that clear, so I’ll do that next.