Yes, that’s the only way. The idea is to install a web server (nginx is recommended) in front: if Discourse is up, it proxies requests through to it; if not, it serves something else, such as a static offline page. The whole installation process is explained step by step.
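The gist of the "nginx in front" approach can be sketched roughly like this. Everything here is an assumption for illustration: the port 8080, the domain, and the `/var/www/errorpages` path are placeholders, not values from the guide.

```nginx
# Hypothetical sketch: nginx in front of Discourse with a static fallback page.
# Port, domain, and paths are placeholders; adapt them to your install.
server {
    listen 80;
    server_name forum.example.com;

    location / {
        # Discourse container assumed to be exposed on localhost:8080.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # When the container is down, nginx generates a 502 itself,
        # which we map to the offline page.
        error_page 502 503 504 = @offline;
    }

    location @offline {
        root /var/www/errorpages;
        try_files /offline.html =503;
    }
}
```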
However, we would like to reference the related question by @mlinksva at Site maintenance mode during rebuilds? here, as it also resonates with us and is not yet solved by the
/errorpages solution. It is about improving the generic text “Sorry, we couldn’t load that topic, possibly due to a connection problem.” We will try to outline this in more detail.
This is perfect when users arrive fresh at the site.
Serving a different “Sorry for that” text
However, when navigating within Discourse, it will yell at you like this,
without revealing anything about the reason.
Knowing you, there is probably already a customization feature for changing that text, right? We might just have missed it. We also have not investigated whether the feature Admin » Backup » Enable read-only would already solve this, as outlined in Maintenance Mode?.
Nevertheless, it made sense to us to bring this topic up here again; we hope you don’t mind if that turns out to have been silly.
With kind regards,
P.S.: @staff: As this discussion has somehow spiraled out of control regarding Nginx and web server configuration details, I would like to suggest a thorough refactoring: splitting these posts into an appropriately named topic like “Configuring a web server for the offline page”. I’m sure you will find a good title. Thanks in advance if you like that suggestion and find it worth following.
Now, we actually do feel silly after finding it right away as a customizable site text block:
We’ve changed the standard text
js.topic.server_error.description to read:
Thanks for listening ;].
Hm. We are not sure if changing that text actually works for us. Do we have to consider anything special when amending this one?
Also, we would like to mention that during a specific time range while the site was down, it was yelling differently, like
Do you have an idea how we would be able to change that also?
I use that, but I want a custom offline web page and I’m not able to set it up.
If only you had also included some commands to auto-renew the certificate, it would be a complete guide.
I’ve seen the link mentioned here, but it only explains how to install the certificate fresh, or how to renew it manually.
I couldn’t find any guidance on auto-renewing it.
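For anyone else looking: assuming the certificate was issued with certbot (which the linked guide may or may not use), auto-renewal is usually just a scheduled `certbot renew`. This is a sketch of one common setup, not the guide's official procedure:

```shell
# Hypothetical sketch, assuming certbot issued the certificate.
# First verify that renewal would succeed, without actually renewing:
certbot renew --dry-run

# Then schedule a periodic renewal attempt, e.g. via a crontab entry.
# certbot only renews certificates that are close to expiry, so running
# it daily is harmless. Example crontab line (reload nginx on success):
#   0 3 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
```

Note that recent certbot packages often install a systemd timer or cron job for this automatically, so check `systemctl list-timers` before adding your own.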
Good point! I updated this section above in the original post
Has anyone else noticed that they are seeing a more generic 500 error when the upgrade occurs? Maybe I just caught it at a bad time?
When the container is stopped during a rebuild there’s nothing running to provide an error 500.
Has anyone tried to use another Docker container for that to avoid all these manual steps, as suggested in the beginning?
Yes, many have. See How to move from standalone container to separate web and data containers. Do note that a separate-container setup is a more complex install, and many of the guides here on Meta assume a single-container (standalone) install. Before you move to separate containers, be sure you understand what the two containers do and how you’d interact with them going forward. Biggest item to note:
app will no longer be a valid target for the
hm, this topic still mentions “nginx in front” for some reason in two posts…
btw what I actually want is
- to have the offline page discussed here
- redirect http --> https and root domain --> www on my web server. I don’t want to use Cloudflare for that because some of its IPs are banned in Russia.
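(For the second goal, what I have in mind is roughly this plain-nginx sketch; example.com is a placeholder domain and the certificate paths are assumptions:)

```nginx
# Hypothetical sketch with placeholder domain and certificate paths.
# Redirect all plain-HTTP traffic to the HTTPS www host:
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://www.example.com$request_uri;
}

# Redirect the HTTPS root domain to www:
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    return 301 https://www.example.com$request_uri;
}
```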
So, as I understand it, I just need to figure out how to do that in the web-only container.
Now I’m confused.
Neither of these goals requires a separate-container setup. Are you looking to configure both of the above and, independently, also interested in separate containers? Or were you looking at separate containers thinking they’re needed to accomplish the above?
As I understand it, the offline (rebuilding) page handling cannot live in the same container, since that container will not be running during a rebuild. So the solution proposed in the current topic is to add nginx in front. But as was discussed in this topic, that requires lots of manual, OS-specific steps, so I thought that running this front nginx in another Docker container would be more reliable and easier to maintain.
Ah, now I’m following. In that case, ignore the topic I linked to previously. That one discusses separating the Discourse web server from the database, which isn’t needed here.
Installing Nginx in a docker container, rather than directly onto the OS is definitely possible, but I’m not aware of any Discourse-specific guides to do so. If you go this route, please be sure you understand the OP of this topic (the necessary changes to create the offline page and install an nginx proxy in front), and that you are well versed in how docker works, especially configuring networking between two docker containers. Also note that we’re likely to be limited in the help we can provide, as this is not something we have experience doing.
I’ve also realized this is no longer working.
I had implemented @fefrei’s approach back in early November and it was definitely working then. Maybe it’s because I was stopping the container manually and doing a git pull instead of using
4 posts were split to a new topic: Add support for a native offline page when rebuilding
We did exactly that recently and the offline page successfully kicked in.
Right now, we just went for the online upgrade through
/admin/upgrade and found Discourse did not go offline at all! Regardless of whether this has been improved recently or we just got lucky this time, it is great to see an online upgrade happen without any significant downtime at all.
Discourse should never go offline when running upgrades through Docker Manager (
/admin/upgrade). Does it usually go offline for you? If so, we should start a #support topic about that.
Maybe we only implemented the offline page because we started using the upgrade procedure »stop container, git pull, launcher rebuild« after being hit by issues like [1, 2, 3] a few times.
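For context, that manual procedure is roughly the following, assuming a standard single-container install in /var/discourse (the directory and the app name `app` are the usual defaults, not something specific to our setup):

```shell
# Sketch of the manual »stop, pull, rebuild« upgrade procedure,
# assuming the standard install location and container name.
cd /var/discourse
./launcher stop app      # container goes down; the offline page takes over
git pull                 # update the discourse_docker scripts themselves
./launcher rebuild app   # rebuild the image and restart the container
```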
Maybe something changed regarding the robustness of killing PostgreSQL when it doesn’t shut down in time, letting the upgrade process run through smoothly.
Either way, the online upgrade (again) worked well for us when we gave it another shot just now. So, never mind, and sorry for the noise.