MKJ's Opinionated Discourse Deployment Configuration

I have been running a Discourse forum with a substantial amount of content and plenty of images for the past few years. Maker Forums has over 100GB of images and over 400,000 posts, a substantial portion of which was imported, primarily from Google+; the rest was created on the site. This post describes elements of how I eventually configured Maker Forums and, later, a few other Discourse instances. It is what I wish I had known when I got started, and I have used it to help others avoid some of the same pitfalls with their own Discourse instances.

Time for a wider audience.

This is still a DRAFT.

:warning: Warning: If you are not comfortable working as a Linux systems administrator, this guide is probably not for you. I may not even be aware of all the ways that it presumes knowledge about Linux. If this feels enlightening to read, you may be the target audience. If it feels confusing to read, you are probably not the target audience. If this feels like work, please consider paying CDCK or @pfaffman to run Discourse for you; they know what they are doing. :warning:

:warning: As if that weren’t enough: I have more Linux expertise than Discourse expertise. My opinions come with no warranty. If trying to follow my advice causes anything of yours to break (your Discourse forum, your host system, or your heart) you get to keep both pieces, with all the sharp edges. I have no plans to provide any form of support for the content in this post. :warning:

I plan (but do not promise) to keep this document up to date with my practices covering the Discourse instances that I participate in maintaining. This is written in the form of advice, but I intend it primarily as advice to myself and to any administrators who inherit Discourse deployments that I have been responsible for. Otherwise, you should consider it as one jumping-off point for your own research to determine how you would like to deploy Discourse.

TL;DR

  • Install on an Ubuntu LTS release
  • Passwords are a bad choice for system access
  • Use a separate device and file system for /var/discourse/shared
  • Start with a two-container install
  • Add a mail receiver container
  • Set up an offline page with external nginx (or use a service)
  • Don’t rush to serve images via S3; it’s a one-way door
  • Keep all your external configuration backed up and in version control
  • Configure backup with thumbnails included
  • Set up streaming off-site uploads backup
  • Configure prometheus or sysstat for retrospective statistics

Ubuntu LTS

I’m a Fedora user. I was the first Fedora Project Lead at Red Hat, and I’d much rather run Discourse on top of Fedora and Podman because, in my opinion, they do a better job of security overall than Ubuntu and Docker. However, Discourse is developed on top of Ubuntu and Docker, and you will be quite a pioneer if you try to run on top of anything else. Note that Fedora’s own Discourse instances are hosted by CDCK, presumably on Ubuntu.

Security

This section really has nothing to do with Discourse per se, but it’s part of my normal security practice. Don’t allow password-only shell access to any system on the network, including a VM running Discourse. Set up SSH-based access using a passphrase-encrypted SSH key, and configure the ssh server on your VM not to allow password access.

laptop$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/.../.ssh/id_rsa):
Enter passphrase (empty for no passphrase): SOME LONG PHRASE
Enter same passphrase again: SOME LONG PHRASE
Your identification has been saved in .ssh/id_rsa
Your public key has been saved in .ssh/id_rsa.pub

Linux distributions are normally set up to remember the passphrase in memory (via ssh-agent), so you only have to type it once per boot. Windows is not as convenient; you might consider using Pageant with PuTTY to get the same effect.

First, validate that incoming SSH works without a password. Only after doing that, on the server, modify the file /etc/ssh/sshd_config and find the PasswordAuthentication line. Set it to no to disable incoming password access.

PasswordAuthentication no
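A minimal sshd_config fragment implementing this is sketched below; the PermitRootLogin and ChallengeResponseAuthentication lines are additional hardening I consider reasonable, not something Discourse requires.

```
# /etc/ssh/sshd_config fragment: key-only access.
# Validate with `sshd -t` before reloading, so a typo cannot lock you out.
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password
```

After editing, run sshd -t to check the syntax, then reload the ssh service, keeping your existing session open until you have confirmed that a fresh key-based login still works.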

Separate device and file system

Make /var/discourse/shared a separate device with its own file system, with 20GB of space plus at least twice as much room as you need for images; add more space if you will be using prometheus. If the device will be easy to expand later (such as LVM or any cloud block storage like AWS Elastic Block Store) you can monitor and grow it as you need to; otherwise be generous at the start. If you are using a network storage block device, do not put a partition table on it; with no partition table to modify, expanding is easier, and in many cases you can do it without any system downtime.
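As a sketch of that layout, assuming the block device shows up as /dev/sdb (device names vary by provider), creating the file system directly on the device and growing it later looks something like:

```
# Create a file system directly on the bare device (no partition table)
mkfs.ext4 /dev/sdb
mkdir -p /var/discourse/shared
mount /dev/sdb /var/discourse/shared
# Persist across reboots; the UUID is safer than /dev/sdb, which can change
echo "UUID=$(blkid -s UUID -o value /dev/sdb) /var/discourse/shared ext4 defaults,nofail 0 2" >> /etc/fstab

# Later, after the provider enlarges the volume, grow the file system online:
resize2fs /dev/sdb
```

Because ext4 supports online growth, the resize2fs step does not require unmounting or downtime.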

On Maker Forums, this is a network storage block device attached to the VM on which Maker Forums is running. On another Discourse forum, it is a Digital Ocean Block Storage Volume. In Amazon, this would be AWS Elastic Block Storage. On my test system running a KVM VM under libvirt on Fedora, it is an LVM volume on the Fedora host exported to the Ubuntu VM as a virtual disk. In each case, I could create a new VM, copy key files across to it, stop the old VM, attach the /var/discourse/shared volume to the new VM, and be back up and running in minutes. This makes operating system upgrades on the VM relatively low risk.

Make sure that you start out with at least 25GB on the root filesystem for your VM, not including any space for /var/discourse/shared. This will be used for all the docker containers, and the discourse launcher will fail if less than 5GB is free at any time. You want plenty of space available for system updates, too. If you don’t have enough disk space, this is hard to recover from.

In site configuration, do set force_https but heed the warnings. Set it up in test, before taking a Discourse site public.

Two-container installation

Start the configuration with two containers.

./discourse-setup --two-container

This keeps the system downtime required every few months quite short, often not noticeable at all. It’s one to two minutes of actual downtime, but many users won’t notice if they don’t click or scroll past content during the outage. It also makes most security updates easier to apply; they become a blip rather than approximately 15 minutes of rebuilding everything. The following process works for most updates and gives approximately 90 seconds of downtime, depending primarily on the performance of the host system.

./launcher bootstrap app
./launcher destroy app && ./launcher start app
./launcher cleanup

This does mean that when you update Discourse, you also have to check whether to update the data container, but that is rarely required (typically once or twice per year). For more on knowing when to update data, see:

(In my own deployments, I chose to call the web_only container app, both because it’s easier to type and because most instructions assume a single container named app, so they become easier to follow. This is non-standard; it was extra setup work, and it works for me because I understand what is going on underneath. If that sounds bad to you, stick with the default web_only name for a multi-container deployment.)

Mail receiver container

Set up a third container as a mail receiver. It handles bounce processing independently of your outgoing mail provider and gives you the option of reply-by-email.

Make sure you have SPF set up for your email sender; minimally, a policy like v=spf1 +mx ~all if you send and receive through the same MX, though a more specific policy may be better trusted for spam protection. Consider DKIM as well.
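As a sketch, the DNS TXT record for that minimal policy looks like the following, with example.com standing in for your domain:

```
example.com.  IN  TXT  "v=spf1 +mx ~all"
```

You can check what is currently published for a domain with dig +short TXT example.com.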

Terminate user SSL connections outside the container

External nginx

Use nginx running on the host system, not in a container, both to host a maintenance page and to support IPv6 address logging if your host has IPv6 support. (Otherwise, all IPv6 connections will be logged as coming from an internal RFC1918 address associated with your local docker virtual network interface.) This configuration will present a temporary maintenance page during most maintenance operations, and that page will eventually redirect back to the page the user was looking at.
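A simplified sketch of that shape of configuration follows. The names, port, and paths are illustrative, not taken from a supported template, and a real configuration also needs SSL termination and websocket proxy headers.

```
# Illustrative only: proxy to the Discourse container (port is an assumption),
# falling back to a static maintenance page when the container is down.
server {
    listen 80;
    listen [::]:80;           # IPv6, so real client addresses get logged
    server_name forum.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_intercept_errors on;
        error_page 502 503 504 = /offline.html;
    }

    location = /offline.html {
        root /var/www;        # static maintenance page
    }
}
```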

Make sure that certbot reloads nginx and restarts the mail receiver container so that browsers and mail servers do not end up refusing to talk to your site because it is still presenting an old, expired certificate.

# systemctl edit certbot

For a system without a mail receiver, I added the two lines:

[Service]
ExecStartPost=/bin/systemctl reload nginx

On a system where I’m using a separate mail-receiver container that also shares the cert from the system:

[Service]
ExecStartPost=/bin/systemctl reload nginx
ExecStartPost=/bin/sh -c 'cd /var/discourse && ./launcher restart mail-receiver'

External service

I have not configured Fastly or Cloudflare in front of Discourse, but others have, and unlike external nginx running on the host, they can allow you to serve a maintenance page while the host system is entirely down, such as when rebooting during a system update on your host. If this is worthwhile to you, here’s how to do it:

Don’t rush to S3 uploads

Be very sure you always want to use S3 (or equivalent) for uploaded images before you turn on enable_s3_uploads during setup, or migrate to it later. Be aware that using S3 (s3_endpoint) with its associated CDN (s3_cdn_url) for images will also result in serving javascript via that CDN. Migrating from S3 back to local storage is not supported, and there are no concrete plans to implement it at this time. It’s a “one-way door” that can’t be undone even by a full backup and restore. If you do use S3 or an equivalent, don’t use Digital Ocean Spaces; there are references here on meta to it not being reliable.

I moved my site to serving images through Digital Ocean Spaces and its associated CDN early on, and because the “one-way door” was not well understood, I had to write hundreds of lines of custom code to migrate back to local storage, doing minor damage to my Discourse instance in the process.

For more information:

You do not need to enable S3 uploads to use a CDN for your Discourse. Consider using an independent CDN (e.g. Cloudflare, CloudFront, Fastly, GCS CDN) in front of a Discourse that manages its own images. It is my second-hand understanding that the warning against Cloudflare is due to “Rocket Loader” modifying JavaScript, and that at this time, as long as you don’t use “Rocket Loader”, it functions correctly.

Backup

For system files, consider backing up at least:

  • /var/discourse/containers (for discourse configuration details)
  • /var/www (for error pages)
  • /etc/ssl (for letsencrypt config, to avoid having to bootstrap certbot as part of restoring a backup; otherwise you have to comment out the SSL portion of /etc/nginx/sites-available/default while you are bootstrapping; this works only if you keep the backups recent because the certificates have short validity)
  • /etc/systemd/system/backup-uploads.service (for doing images backups to S3)
  • /usr/local/bin/mc (minio-client as image backup tool, if you choose to use it)
  • /root/.mc (configuration for image backup with minio-client)
  • /root/.ssh (incoming SSH session authentication)

Some of these files you can back up by checking them into Git and pushing them somewhere off site. If the files you check into Git include secrets (like database passwords), definitely don’t push them to a public repository. Alternatively, you could script copying them off the system and checking them into Git on a supervisory system that you control. Run that script frequently enough to keep your backups of /etc/ssl fresh, because the certificates have short validity.
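That kind of snapshot script might look something like the sketch below. It is demonstrated against a temporary directory; on a real host you would copy /var/discourse/containers, /etc/ssl, and the rest of the list above, and push to a private remote.

```shell
# Sketch of a config-snapshot script. The paths are temp-dir stand-ins
# so the sketch is self-contained; substitute real directories in use.
set -e
DEMO=$(mktemp -d)
SRC="$DEMO/src"                 # stand-in for the real config directories
REPO="$DEMO/config-backup"
mkdir -p "$SRC" "$REPO"
echo 'demo: app.yml contents' > "$SRC/app.yml"

git -C "$REPO" init -q
git -C "$REPO" config user.email 'backup@localhost'
git -C "$REPO" config user.name 'config snapshot'

cp -r "$SRC/." "$REPO/"
git -C "$REPO" add -A
# Commit only when something actually changed since the last snapshot.
git -C "$REPO" diff --cached --quiet || \
    git -C "$REPO" commit -qm "config snapshot $(date -u +%Y-%m-%d)"
git -C "$REPO" log --oneline
```

Run from cron or a systemd timer on the supervisory system; the changed-only commit check keeps the history readable.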

The goal is to have backups both in case of disaster and as a record of changes in case of mistakes.

Discourse Configuration for backup

  • Back up thumbnails with include_thumbnails_in_backups. A restore without thumbnails takes a long time to regenerate them. If your site doesn’t have many graphics, the thumbnails take insignificant space. If your site is graphics-rich, regenerating thumbnails could take days. While thumbnails are being regenerated, email notifications will be disabled. Either way, it makes no sense to omit thumbnails from backups.

  • Do not include images in backups if you have lots of images. This will make backups slow and unwieldy. Back them up separately. If you back up images after your database backup, your backups will be consistent.

  • Arrange for backups to go off site somehow.

This page shows how to set up database backups to S3 or something like S3:

While database backups can be stored to S3, there is no separate S3 image backup apart from serving images from S3. An alternative is to use minio-client to copy images to S3-compatible storage. Many S3-like targets work, including S3 itself and minio, but not DigitalOcean Spaces, because it is built on top of Ceph, which does not implement the ListObjectsV2 API the same way that S3 does.

In S3, create a bucket that blocks public access (Permissions → Block public access is the easy way to get this right in AWS).

Install minio-client (mc) somehow. Here’s one way.

curl https://dl.min.io/client/mc/release/linux-amd64/mc > /usr/local/bin/mc
chmod +x /usr/local/bin/mc

Configure minio-client with an alias called backup using a command something like this:

# mc alias set backup https://s3.amazonaws.com ACCESSKEY SECRETKEY --api S3v4
# mc mirror /var/discourse/shared/app/uploads backup/UPLOADS-BACKUP-BUCKET

Then create a service /etc/systemd/system/backup-uploads.service like this:

[Unit]
Description=Neartime remote backup sync of discourse uploads
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=600
User=root
ExecStart=/usr/local/bin/mc mirror --overwrite -a --watch /var/discourse/shared/app/uploads backup/UPLOADS-BACKUP-BUCKET

[Install]
WantedBy=multi-user.target

Note that UPLOADS-BACKUP-BUCKET here should be a different bucket from the s3_backup_bucket into which you configure discourse to upload database backups. Also, note that the path will be /var/discourse/shared/web_only/uploads if you use the standard multi-container deployment.

# systemctl enable backup-uploads
# systemctl start backup-uploads
# journalctl -fu backup-uploads

Upload a test image and make sure you see lines for successfully backing up the original and optimized images. Control-C will exit follow mode in journalctl.

Recovery

I have never had to test this plan as of this writing. This summary might miss something.

  • Restore all backed up files generally
  • Start nginx (now your maintenance page will show)
  • Do a normal deployment of Discourse using the restored files in /var/discourse/containers
  • Install minio-client in /usr/local/bin/mc if you didn’t restore it from backups
  • If you did not back up /root/.mc, set up the backup alias # mc alias set backup https://s3.amazonaws.com ACCESSKEY SECRETKEY --api S3v4
  • # mc mirror backup/UPLOADS-BACKUP-BUCKET /var/discourse/shared/app/uploads
  • Restore the most recent database backup; I recommend that you Restore a backup from command line
  • Only after you have confirmed the site is operational, re-configure backing up uploads to S3 as documented above.

Streaming postgresql backups

In the future, I may create, test, and provide a configuration that uses continuous WAL archiving (via archive_command in postgresql.conf) to stream near-instantaneous Postgres backups with minio-client, similar to the streaming uploads backups above.
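I have not tested this, but the general shape of such a configuration in postgresql.conf would be something like the following sketch; the bucket name is a placeholder, and restoring from a WAL archive (restore_command, base backups) is a whole additional topic.

```
# postgresql.conf fragment: untested sketch of WAL archiving via minio-client
archive_mode = on
archive_timeout = 60        # force a segment switch at least once per minute
archive_command = '/usr/local/bin/mc cp %p backup/WAL-BUCKET/%f'
```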

Performance monitoring

Prometheus container

Set up prometheus, putting prometheus logs in /var/discourse/shared/prometheus if you are running it on the same system. Prometheus files can grow large, and you do not want them to fill up the root file system; you also probably want to bring them along if you move to a newer host system (either upgrading to a larger VM or a VM with a newer operating system installation).

If you deploy prometheus on the discourse system (or anywhere else on the public internet), configure security in front of it. Installed that way, one option is nginx configuration like this:

  location /prometheus/ {
    auth_basic "Prometheus";
    auth_basic_user_file /etc/nginx/prometheus-htpasswd;
    proxy_pass http://localhost:9090/;
  }
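The password file referenced there can be created with the htpasswd tool from the apache2-utils package; the username admin is just an example.

```
apt install apache2-utils
htpasswd -c /etc/nginx/prometheus-htpasswd admin
```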

Sysstat

If Prometheus is too much, consider using sysstat instead.

  • apt install sysstat
  • In /etc/cron.d/sysstat change 5-55/10 to */2
  • In /etc/default/sysstat change false to true

After this, the sar command can tell you whether you have been running out of resources at particular times.
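For example, once data has accumulated, these report history for the current day (add -f /var/log/sysstat/saDD for an earlier day of the month):

```
sar -u        # CPU utilization history
sar -r        # memory utilization history
sar -d        # per-device disk I/O history
sar -q        # run queue length and load averages
```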

Discourse settings for moderation

On any site where moderation is active, strongly consider the enable_whispers setting, which allows moderators and administrators to discuss a topic privately, in line within the topic itself. Also, category moderators have been given more abilities in recent versions of Discourse. It is worth being aware of enable_category_group_moderation if you have experts in different topics with their own categories, or if you have functionally separate categories such as for support.
