Maybe we just implemented the offline page because we had started using the upgrade procedure »stop container, git pull, launcher rebuild« after being hit by things like [1,2,3] a few times.
Maybe something changed about how robustly PostgreSQL gets killed when it won’t shut down in time, so that the upgrade process now runs through smoothly.
Either way, the online upgrade worked well for us (again) when we gave it another shot just now. So, never mind, and sorry for the noise.
That’s a bit confusing, since what follows is a guide for installing and configuring nginx outside the container.
In any case, today I realized an additional benefit of this external nginx configuration: if you are used to seeing 127.0.0.1 or your Docker address (probably starting with 172.) as a registration or login IP address, that may be because IPv6 traffic forwarded into the container did not carry the client’s IPv6 address along with it, unlike IPv4. With this configuration, you will now see correct IPv6 addresses instead of local addresses.
In other words, this configuration is essentially required for correct function of a useful administrative tool on the increasingly-IPv6 internet. (In the US, this includes lots of mobile traffic.)
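The piece that makes this work is the external nginx passing the real client address (IPv4 or IPv6) through to the container. In a typical reverse-proxy block that looks roughly like this (the socket path is the one from the usual external-nginx guide; adjust to your setup):

```nginx
location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # real client IP, v4 or v6
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
    proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;  # assumed container socket
}
```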
Thanks for this very helpful guide! A couple of comments:
I think sudo apt-get install letsencrypt has been replaced with sudo apt-get install certbot. Running the former, I get the notice Note, selecting 'certbot' instead of 'letsencrypt'.
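So on a current Debian/Ubuntu system the install step would presumably just be:

```
sudo apt-get update
sudo apt-get install certbot
```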
A friend noticed that Facebook sharing of the site showed a preview of “301 moved permanently”.
Edit: I had originally replaced the location / section of the port 80 server block with the location / section of the port 443 server block, but I think that’s redundant. Instead I just deleted the port 80 server block, which only served as a redirect, and added:
listen [::]:80;
listen 80;
in the appropriate section of the main server block.
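A rough sketch of what the combined block ends up looking like (server name, certificate paths, and the proxy socket are placeholders taken from the usual guide, so adjust them to your setup):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    listen 80;
    listen [::]:80;
    server_name forum.example.com;  # placeholder

    ssl_certificate     /etc/letsencrypt/live/forum.example.com/fullchain.pem;  # placeholder paths
    ssl_certificate_key /etc/letsencrypt/live/forum.example.com/privkey.pem;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;  # assumed container socket
    }
}
```

With the port 80 redirect block gone, plain HTTP requests hit this block directly, and the redirect to HTTPS is left to Discourse.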
I also enabled the HTTPS redirect (not sure if that’s necessary) from within the Discourse settings.
That fixed the issue with FB sharing, and it does seem as though regular HTTP requests are being redirected to HTTPS. If there is another or better method, please let me know.
Thanks for the tutorial, it’s great. Now my 502 page looks much better.
In my case, I had to add the nginx configuration to the /etc/nginx/sites-enabled/discourse.conf file.
I have successfully installed Discourse on a server where nginx is already running WordPress.
I ran into the problem of nginx not knowing about the renewed cert, because nothing restarts it after renewal in the setup from this guide. For me, the solution was:
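Something along these lines, assuming certbot’s standard deploy-hook mechanism (the exact service name and hook path can differ per distro):

```
# reload nginx whenever a certificate actually gets renewed
sudo certbot renew --deploy-hook "systemctl reload nginx"

# or, to make it permanent for all future renewals, drop a small script into
# /etc/letsencrypt/renewal-hooks/deploy/ that runs `systemctl reload nginx`
```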
Thanks for the tutorial, which works quite fine for me.
I was just wondering: if, for example, Googlebot sees that error page, would it know that this is a temporary page? Or do we need to send some kind of error code to make it aware of the temporary nature of the change?
I would rather not see Google drop all the indexing it has done of my forum just because of a fancier error page…
This means “when encountering (or originating) a 502 Bad Gateway response code, send the contents of the /errorpages/discourse_offline.html file, with a 502 Bad Gateway response code.” The = is what tells nginx which HTTP response code to send.
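As a rough illustration, the relevant nginx directives look something like this (the error-page path is the one used in this guide; the alias is an assumption about where the static file actually lives):

```nginx
# serve the static offline page, but keep answering with 502 so clients and
# crawlers know the outage is temporary
error_page 502 =502 /errorpages/discourse_offline.html;

location /errorpages/ {
    alias /var/www/errorpages/;  # assumed directory holding discourse_offline.html
}
```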
It’s all good!
And I concur with @ashs; a minute or less of 502 once or twice a month hasn’t harmed search. I often see recent posts returned in google results.
A 502 probably indicates your nginx is not starting, most likely due to a configuration error. Running nginx -t tells you whether the configuration file looks fine. If there are no errors, run systemctl status nginx.service to check the status of the nginx service.
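For example:

```
sudo nginx -t                        # syntax-check the configuration
sudo systemctl status nginx.service  # is nginx running, and if not, why
sudo journalctl -u nginx -n 50       # recent log lines, if the status output is not enough
```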
My question directly relates to the topic title, but not to the method used in this topic, so I hope it is OK to keep it in this discussion.
I set up something very simple to solve this problem and have a specific question.
I set up a separate droplet in DigitalOcean and installed a LAMP server via the marketplace. I then uploaded a basic HTML page with some images to indicate the server was down for maintenance. I would then float an IP between my regular Discourse server and this maintenance server as required.
Here is the question: in order for the ‘maintenance’ server to load correctly, I ultimately needed to get a certificate through certbot for that server (in addition to the one I already had for the main discourse instance). In other words, 2 certificates for the same domain on different servers. It worked, but I have always been concerned about whether that could mess things up in the future. The reading I have done online suggests it is OK to have this, but I wanted to see if anyone has had direct experience with this.
This is perfectly valid to do. However, depending on how you performed validation, certificate renewals may not work – for example, if your “maintenance” server is using HTTP-based validation, this will fail while the domain isn’t pointed to it, which probably defeats the purpose. It might make sense to have the maintenance server occasionally copy over the most recent certificate from the main server instead of requesting one from Let’s Encrypt.
I will admit to not having any idea if my server is using HTTP-based validation (I just did everything through that amazing certbot) but your concern is entirely logical. I looked around a bit, but can’t find any resources on how to copy certificates like you suggest. Also, I assume I would need some sort of cron job setup. If you have any further suggestions, that would be great. Otherwise, thanks again for your help.
To copy files directly from server to server, scp or rsync are good tools to use – this may be a good place to get started.
My suggestion would indeed be to have a cron job copy the certificate from the main to the maintenance server regularly.
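A minimal sketch of that, assuming key-based SSH access from the maintenance server to the main one, default certbot paths, and placeholder hostnames:

```
# /etc/cron.d/copy-discourse-cert on the maintenance server (illustrative)
# pull the live certificate once a day and reload nginx so it gets picked up
0 4 * * * root rsync -aL main.example.com:/etc/letsencrypt/live/forum.example.com/ /etc/ssl/discourse/ && systemctl reload nginx
```

(The -L is there so the symlinks in /etc/letsencrypt/live get copied as the actual files; point the maintenance server’s nginx at the copied fullchain.pem and privkey.pem.)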
Oh, and to explain the background of HTTP-based validation: to check that the domain really belongs to you, Let’s Encrypt will request a specific file from your server and expect a certain response. Certbot can handle this automatically (temporarily configuring your server to return this file for the validation request), but of course this only works if the request actually reaches your server. If your DNS doesn’t point to your server, or you have moved the IP somewhere else, the request will go to the wrong server, Let’s Encrypt will not get the expected response, and it will refuse to sign the certificate.
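Concretely, the validation request is an HTTP fetch of a path under /.well-known/acme-challenge/ on port 80, so a quick way to sanity-check which server is actually reachable (hostname and token are placeholders) is:

```
# check the access logs on both servers to see where this request really lands
curl -i http://forum.example.com/.well-known/acme-challenge/test-token
```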
If you want an “under construction” page while the site rebuilds, you’ll need to do the onerous steps above. I’d recommend switching to a two-container installation instead. It is somewhat more trouble to maintain (you have to know when to rebuild the data container) and currently needs a fair amount of RAM (2GB might not be enough, but I’m not entirely sure), but the downtime is only the roughly 30 seconds it takes the new web container to boot.
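Roughly, the two-container setup looks like this (the file names are the samples shipped with discourse_docker; you still need to edit both files to carry over the settings from your existing app.yml before rebuilding):

```
cd /var/discourse
cp samples/data.yml containers/data.yml
cp samples/web_only.yml containers/web_only.yml
# edit containers/data.yml and containers/web_only.yml, then:
./launcher rebuild data
./launcher rebuild web_only
```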