MKJ's Opinionated Discourse Deployment Configuration

I have been running a Discourse forum with a substantial amount of content and plenty of images over the past few years. Maker Forums has over 100GB of images and over 400,000 posts, a substantial portion of which was imported, primarily from Google+; the rest was created on the site. This post describes elements of how I eventually configured Maker Forums and, later, a few other Discourse instances. It is what I wish I had known when I got started, and what I have since used to help others avoid some of the same pitfalls on their own Discourse instances.

Time for a wider audience.

:warning: Warning: If you are not comfortable working as a Linux systems administrator, this guide is probably not for you. I may not even be aware of all the ways that it presumes knowledge about Linux. If this feels enlightening to read, you may be the target audience. If it feels confusing to read, you are probably not the target audience. If this feels like work, please consider paying CDCK or @pfaffman to run Discourse for you; they know what they are doing. :warning:

:warning: As if that weren’t enough: I have more Linux expertise than Discourse expertise. My opinions come with no warranty. If trying to follow my advice causes anything of yours to break (your Discourse forum, your host system, or your heart) you get to keep both pieces, with all the sharp edges. I have no plans to provide any form of support for the content in this post. :warning:

I plan (but do not promise) to keep this document up to date with my practices covering the Discourse instances that I participate in maintaining. This is written in the form of advice, but I intend it primarily as advice to myself and to any administrators who inherit Discourse deployments that I have been responsible for. Otherwise, you should consider it as one jumping-off point for your own research to determine how you would like to deploy Discourse.

System Setup

Use a CentOS-derived or Ubuntu LTS OS. Anything that supports Docker can probably be made to work, but I’ve used those two.

Docker

I’m a Fedora user. I was the first Fedora Project Lead at Red Hat, and I’d much rather run Discourse on top of Podman because I opine that its security model is preferable to Docker’s. However, Discourse deployments are supported only on Docker, and you will be quite a pioneer if you try to run on top of anything else. (It may, someday, work with Podman using podman-compose if docker-compose is ever supported.)

Now that Docker supports cgroups v2, you can install the official Docker builds on a CentOS-derived system:

dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install --allowerasing docker-ce docker-ce-cli

(Include --allowerasing because of a conflict with podman, runc, and buildah, which may already be installed; they need to be erased to install Docker.)

systemctl enable --now docker

I have tested this with AlmaLinux 9.

Security

This section really has nothing to do with Discourse per se, but it’s part of my normal security practice. Don’t allow password-only shell access to any system on the network, including a VM running Discourse. Set up SSH-based access using a passphrase-encrypted SSH key, and configure the ssh server on your VM not to allow password access.

laptop$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/.../.ssh/id_rsa):
Enter passphrase (empty for no passphrase): SOME LONG PHRASE
Enter same passphrase again: SOME LONG PHRASE
Your identification has been saved in .ssh/id_rsa
Your public key has been saved in .ssh/id_rsa.pub

Linux distributions are normally set up to remember the passphrase in memory, so you only have to type it once per boot. Windows is not as convenient; you might consider using Pageant with PuTTY to do the same.
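If your environment does not handle this automatically, an SSH agent can cache the decrypted key for your session. For example, using the default key path:

laptop$ eval "$(ssh-agent)"
laptop$ ssh-add
Enter passphrase for /.../.ssh/id_rsa: SOME LONG PHRASE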

First, validate that incoming SSH works without a password. Only after doing that, on the server, modify the file /etc/ssh/sshd_config and find the PasswordAuthentication line. Set it to no to disable incoming password access.

PasswordAuthentication no

Firewall

You will need to leave ports 80 and 443 generally open for letsencrypt to generate and renew your SSL certificates, even before your Discourse is open to the public.

If you are using firewalld, these commands will accomplish this:

firewall-cmd --add-service http --add-service https --zone public
firewall-cmd --runtime-to-permanent

Separate device and file system

Make /var/discourse/shared a separate device with its own file system, with 20GB of space plus at least twice as much room as you need for images; add more space if you will be using prometheus. If the device will be easy to expand later (such as LVM or any cloud block storage like AWS elastic block storage), you can monitor and grow it as needed; otherwise, be generous at the start. If you are using a network storage block device, do not put a partition table on it; without one, expanding is easier because there is no partition table to modify, and in many cases you will be able to expand without any system downtime.
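As an illustration, assuming the block device appears as /dev/sdb (the device name and the choice of XFS here are placeholders, not requirements), creating the file system directly on the device might look like:

mkfs.xfs /dev/sdb
mkdir -p /var/discourse/shared
echo '/dev/sdb /var/discourse/shared xfs defaults,nofail 0 0' >> /etc/fstab
mount /var/discourse/shared

After enlarging the underlying device later, the file system can then be grown online:

xfs_growfs /var/discourse/shared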

On Maker Forums, this is a network storage block device attached to the VM on which Maker Forums is running. On another Discourse forum, it is a Digital Ocean Block Storage Volume. In Amazon, this would be AWS Elastic Block Storage. On my test system running a KVM VM under libvirt on Fedora, it is an LVM volume on the Fedora host exported to the AlmaLinux VM as a virtual disk. In each case, I could create a new VM, copy key files across to it, stop the old VM, attach the /var/discourse/shared volume to the new VM, and be back up and running in minutes. This makes operating system upgrades on the VM relatively low risk.

Make sure that you start out with at least 25GB on the root filesystem for your VM, not including any space for /var/discourse/shared. This will be used for all the docker containers, and the discourse launcher will fail if less than 5GB is free at any time. You want plenty of space available for system updates, too. If you don’t have enough disk space, this is hard to recover from.

In site configuration, do set force_https, but heed the warnings: set it up in test, before taking a Discourse site public. Note that even with force_https you need port 80 open, both to redirect to SSL on port 443 and to renew your letsencrypt SSL certificate. (However, if you are using Cloudflare, use its equivalent feature instead; Cloudflare is reported not to be compatible with force_https in Discourse.)
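If you terminate SSL in an external nginx as described below, the port-80 side of this might look like the following sketch (the server name and ACME webroot path are assumptions):

  server {
    listen 80;
    server_name forum.example.com;
    location /.well-known/acme-challenge/ {
      root /var/www/letsencrypt;
    }
    location / {
      return 301 https://$host$request_uri;
    }
  }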

Kernel configuration

Redis (one of the key components on which Discourse is built) strongly recommends disabling transparent huge pages when using on-disk persistence (which Discourse does), and I also allow memory overcommit.

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 'vm.overcommit_memory=1' > /etc/sysctl.d/90-vm_overcommit_memory.conf
sysctl --system
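Transparent huge pages are controlled through /sys rather than through sysctl, so the echo never setting above takes effect immediately but does not survive a reboot. One way to persist it is a small systemd unit; this is a minimal sketch, and the unit name disable-thp.service is my own choice:

[Unit]
Description=Disable transparent huge pages for Redis
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/disable-thp.service, then run systemctl enable --now disable-thp.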

Discourse Installation

While the default installation is a single container, that makes every upgrade (recommended monthly) typically a 10-15 minute downtime when done from the command line. Command-line rebuilds are necessary for some updates, including those that update the tools on top of which Discourse is built (for security or new features), and whenever the live update from the UI fails for any reason. You can reduce that downtime in practice with the two-container installation.

Two-container installation

Start the configuration with two containers.

./discourse-setup --two-container

This makes required system downtime every few months quite short, rarely noticeable; many users won’t notice at all if they don’t click or scroll past content during the outage. This makes it easier to apply most security updates; it’s just a blip rather than approximately 15 minutes of rebuilding everything. The following process works for most updates and typically gives approximately 30-90 seconds of downtime, depending primarily on the performance of the host system and the set of plugins installed.

cd /var/discourse
git pull
./launcher bootstrap app
./launcher destroy app && ./launcher start app
./launcher cleanup

Do not delay between the bootstrap and the destroy/start invocations. Infrequently (maybe once or twice a year in practice), the database migrations done near the end of the bootstrap phase will cause more or less serious errors to present to users of the app, due to the older code accessing the updated database.

This does mean that when you update Discourse, you also have to check whether to update the data container, but that is rarely required (typically once or twice per year). Depending on the contents of your data and app containers and the speed of the system, a data container update will typically result in a downtime of 5 to 20 minutes.

cd /var/discourse
git pull
./launcher stop app
./launcher rebuild data
./launcher rebuild app

For more on knowing when to update the data container, see:

(In my own deployments, I chose to call the web_only container app, both because it's easier to type and because it makes most instructions easier to follow. This is non-standard, but I continue to appreciate the convenience. However, it was extra work, and it works for me because I know what is going on. If that sounds bad to you, stick with the default web_only for a multi-container deployment.)

Note that at some point in the future, Docker may force you to do a migration to a new configuration for connecting your containers:

If you have 4GB or more of memory, or multiple CPUs, read advice at:

Update Schedule

Watch the release-notes tag (click on release-notes and then click on the bell at the upper right; I use “Watching First Post”) and/or add https://meta.discourse.org/tag/release-notes.rss to your RSS feed to know when there are releases. Read the release notes before updating. If there is a database change, the release notes will mention it, and they will also call out releases that contain security updates. Read all release notes even if you skip updating to some versions; otherwise you might miss database update instructions that were given only in the notes for a release you skipped.

Mail

Mail is still one of the key ways to keep connecting people. Set up outgoing and incoming mail to make mail work for you. If you have trouble see:

Keep reaching out

Maker Forums has seen occasional visitors who are gone for long stretches before returning. By default, Discourse stops sending digest emails after a year. Consider setting suppress_digest_email_after_days to something longer than the default 365 days if you want to encourage occasional visitors to come back when they see something new and interesting. I made it substantially longer for Maker Forums to help occasional visitors keep up to date. Reading digest emails is a valid way to “lurk” on a forum, and you never know when something is going to spark someone’s interest in contributing.

Similarly, by default, unprivileged users who haven’t interacted much (trust level 0 with no posts) are eventually deleted after 730 days of not logging in. Set “clean up inactive users after days” to 0 to disable deleting users, if you want them to be able to lurk reading digest emails indefinitely.

Consider adding the yearly review plugin, which once per year will generate a post like 2020: The Year in Review and ultimately email it to your inactive users, which may encourage them to renew participation.

Mail receiver container

Set up a third container as a mail receiver. It ensures bounce processing, makes bounce handling independent of your outgoing mail provider, and gives you the option of reply-by-email.

Make sure you have SPF set up for trusting your email sender; minimally, a policy like v=spf1 +mx -all if you send and receive through the same MX, though a more specific policy may be better trusted for spam protection. Consider DKIM as well.
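In DNS zone-file terms, the minimal policy above would be published as a TXT record (forum.example.com is a placeholder):

forum.example.com.  IN  TXT  "v=spf1 +mx -all"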

If you use the same host name to receive email, you should terminate SSL outside your container, as for an “offline page” (see below), and you will need to map your certbot certificates into the container and restart the container after running certbot.

Terminate user SSL connections outside the container

There are two choices for terminating SSL outside the container, either of which brings substantial advantages over terminating inside the container. Set up one of them after you have successfully completed discourse-setup and bootstrapped your forum.

External nginx

Use nginx running on the host system, rather than only in a container, both to host a maintenance page, and to support IPv6 address logging if your host has IPv6 support. (Otherwise, all IPv6 connections will be logged as coming from an internal RFC1918 address associated with your local docker virtual network interface.) This configuration will present a temporary maintenance page during most maintenance operations that will eventually redirect back to the page a user was looking at.

Note that the instructions on that page (currently) suggest installing a package called letsencrypt, but it is now normally called certbot instead. If you follow the instructions on that page to use --certonly, you will not need the nginx plugin for certbot, but installing the nginx plugin is another option. On CentOS derivatives that’s:

dnf config-manager --set-enabled crb
dnf install epel-release
dnf install certbot python3-certbot-nginx
systemctl enable --now certbot-renew.timer

Make sure that certbot restarts nginx and the mail receiver container so that you do not end up with browsers or email blocking traffic with your site due to continuing to use an old, expired certificate.

# systemctl edit certbot-renew

For a system without a mail receiver, I added the two lines:

[Service]
ExecStartPost=/bin/systemctl reload nginx

On a system where I’m using a separate mail-receiver container that also shares the cert from the system:

[Service]
ExecStartPost=/bin/systemctl reload nginx
ExecStartPost=/bin/sh -c 'cd /var/discourse && ./launcher restart mail-receiver'

If you are using SELinux, the Ubuntu containers aren’t set up to label the nginx.http.sock file with httpd_sys_content_t for external nginx to be able to access it. You have two choices.

The first is to run nginx in permissive mode, removing SELinux protection for it: semanage permissive -a httpd_t

However, that removes SELinux protection from what is probably the most relevant service! To keep SELinux enabled, you will need to allow nginx to access the error pages and switch from proxying over a unix domain socket to a port (which is a few µs slower, but should not be noticeable to your users).

First, run these commands to allow nginx to access error pages:

semanage fcontext -a -t httpd_sys_content_t '/var/www(/.*)?'
restorecon -R -v /var/www

Then, in your app.yml, comment out or remove the - "templates/web.socketed.template.yml" line, expose port 80 as a different port on the local machine, and rebuild the container.

expose:
  - "8008:80"   # http

Don’t use https here — you have terminated SSL in the external nginx, and the X-Forwarded-Proto header tells Discourse that the request came in via https. Make sure that port 8008 (or whatever other port you have chosen) is not exposed publicly by your firewall settings.

Then run this command to allow nginx to connect over the network to the container:

setsebool -P httpd_can_network_connect 1

Then modify your external nginx configuration from proxying via nginx.http.sock to http://127.0.0.1:8008 (or your chosen port) and clear the default Connection: close header, so that the external nginx doesn’t have to establish a new IP connection for every request.

...
  location / {
    proxy_pass http://127.0.0.1:8008;
    proxy_set_header Host $http_host;
    proxy_http_version 1.1;
    # Disable default "Connection: close"
    proxy_set_header "Connection" "";
...

Removing web.socketed.template.yml also removed the real_ip invocation, so add that back. Make sure that the IP address range you use makes sense; Docker’s default bridge network allocates addresses from the RFC1918 172.16.0.0/12 space (typically 172.17.0.0/16). Since SSL is terminated in the external nginx and the container now listens only on port 80, add to your app.yml file something like this:

run:
  - replace:
     filename: "/etc/nginx/conf.d/discourse.conf"
     from: /listen 80;/
     to: |
       listen 80;
       set_real_ip_from 172.16.0.0/12;

This is required for rate limiting to work correctly.

External service

I have not configured Fastly or Cloudflare in front of Discourse, but others have, and unlike external nginx running on the host, they can allow you to serve a maintenance page while the host system is entirely down, such as when rebooting during a system update on your host. If this is worthwhile to you, here’s how to do it:

Don’t rush to S3 uploads

Be very sure you always want to use S3 (or equivalent) for uploaded images before you enable enable_s3_uploads during setup, or migrate to it later. Be aware that using S3 (s3_endpoint) with its associated CDN (s3_cdn_url) for images will also result in serving javascript via that CDN. Migrating from S3 back to local storage is not supported, and there are no concrete plans to implement it at this time. It’s a “one way door” that can’t even be undone by a full backup and restore. If you do use S3 or something similar, don’t use Digital Ocean Spaces; there are references here on meta to it not being reliable.

I moved my site to serving images through Digital Ocean Spaces and its associated CDN early on, and I had to write hundreds of lines of custom code to migrate back to local storage, doing minor damage to my Discourse instance in the process, due to the “one way door” not being well understood.

For more information:

You do not need to enable S3 uploads to use a CDN for your Discourse. Consider using an independent CDN (e.g. Cloudflare, CloudFront, Fastly, GCS CDN) in front of a Discourse that manages its own images. It is my second-hand understanding that the warning about Cloudflare not being recommended is due to “Rocket Loader” modifying JavaScript, and that at this time, as long as you don’t use “Rocket Loader”, it functions correctly.

Discourse settings for moderation

On any site where moderation is active, strongly consider the enable_whispers setting, which allows moderators and administrators to discuss a topic inline within that topic. Also, category moderators have been given more abilities in recent versions of Discourse; it is worth being aware of enable_category_group_moderation if you have experts in different topics with their own categories, or if you have functionally separate categories such as one for support.

Geolocation can be helpful when trying to understand whether an account is legitimate.

The Discourse Templates plugin is really helpful for moderators. It lets you collaborate on common responses. We have a few dozen at Maker Forums. It has more features than the prior “Canned Responses” plugin that it replaces.

The User Notes plugin will help moderators share notes about users. You can put these to use for things like:

  • “Keep an eye on this user, they may be malicious because …”
  • “While this behavior seems suspect, I have validated that this is a legitimate user by …”
  • “I’m already having a conversation with this user to address concerns, other moderators don’t need to pile on.”

Information Accessibility

The Discourse Solved plugin not only marks solved problems so that site visitors can identify them more easily, but, as I understand it, might also improve placement in Google search results.

Public information is more accessible than private information. On Maker Forums, our FAQ strongly discourages personal messages and reminds everyone that personal messages are not truly private. However, by default, users may see the message:

You’ve replied to user 3 times, did you know you could send them a personal message instead?

If you really want to encourage users to go to personal messages, I suggest that you go to Admin → Customize → Text and change the get_a_room template to fix the comma splice.

If, like Maker Forums, you want to keep conversation in public to benefit everyone, Admin → Settings → Other → get_a_room_threshold can be set higher, like 1000000.

Similarly, if you have a forum providing help, the max_replies_in_first_day default 10 might push new users in a conversation asking for help into personal messages when they use up their budget of replies. Consider increasing this setting to avoid pushing conversations into personal messages.

Connect users, build a community

A few plugins can help connect users to each other.

If your forum doesn’t have too many simultaneous users, consider the Who’s Online plugin to give people more of a sense of connection. You might want to limit display to logged-in users, possibly only those who have reached at least trust level 1. You can use it only to add presence flair (whos_online_avatar_indicator) to avatars by setting whose_online_minimum_display very high and whos_online_hide_below_minimum_display true. This can be useful for support forums to support and encourage quick question and answer while helping users resolve problems.

However, a sense of presence can cut both ways. A user who is online at a different time from the majority of forum users might feel lonely, or the forum might feel like a “ghost town” to them.

If you have users in many countries and want them to have hints about when each other are more likely to be available, consider the National Flags plugin, and encourage users to set a national flag in their profile.

A tricky one is translation. It would be convenient to help people communicate when they don’t speak the same language, but as of this writing there are no translation services with a free tier of service. If you choose to pay for translation services, you can enable translation with the Discourse Translator plugin.

Backup

For system files, consider backing up at least:

  • /var/discourse/containers (for discourse configuration details)
  • /var/www (for error pages)
  • /etc/ssl (for letsencrypt config, to avoid having to bootstrap certbot as part of restoring a backup; otherwise you have to comment out the SSL portion of your nginx configuration while you are bootstrapping; this works only if you keep the backups recent because the certificates have short validity)
  • /etc/systemd/system/backup-uploads.service (for doing images backups to S3)
  • /usr/local/bin/mc (minio-client as image backup tool, if you choose to use it)
  • /root/.mc (configuration for image backup with minio-client)
  • /root/.ssh (incoming SSH session authentication)

Some of these files you may back up by checking them into Git and pushing them somewhere off site. If the files you check into Git include secrets (like database passwords), definitely don’t push them to a public repository. Alternatively, you could script copying them off the system and checking them into Git on a supervisory system that you control. Script this sufficiently frequently to keep your backups of /etc/ssl fresh.
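A minimal sketch of that supervisory approach, in which the host name, repository location, and file list are all placeholders:

#!/bin/sh
# Pull configuration files from the forum host and record them in Git.
set -e
cd /srv/forum-config-backup
rsync -aR root@forum.example.com:/var/discourse/containers :/var/www :/etc/ssl .
git add -A
# Commit only when something actually changed.
git diff --cached --quiet || git commit -m "config backup $(date -u +%F)"
git push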

The goal is to have backups both in case of disaster and as a record of changes in case of mistakes.

A better alternative for most of these files is to keep the canonical copies elsewhere, and use a tool like Ansible to maintain the configuration on the system, which makes it just as easy to update after a backup. But if you are going to do that, you probably figured it out without me telling you!

Discourse Configuration for backup

  • Back up thumbnails with include_thumbnails_in_backups. A restore without thumbnails takes a long time to regenerate them. If your site doesn’t have many graphics, the thumbnails take insignificant space. If your site is graphics-rich, regenerating thumbnails could take days. While thumbnails are being regenerated, email notifications will be disabled. Either way, it makes no sense to omit thumbnails from backups.

  • Do not include images in backups if you have lots of images. This will make backups slow and unwieldy. Back them up separately. If you back up images after your database backup, your backups will be consistent.

  • Arrange for backups to go off site somehow.

This page shows how to set up database backups to S3 or something like S3:

While database backups can be stored to S3, there is no S3 image backup separate from serving images from S3. An alternative is to use minio-client to copy images to any S3-like storage. This can be many S3-like targets, including S3 and minio, but not DigitalOcean Spaces because it is built on top of the Ceph filesystem which does not implement the ListObjectsV2 API the same way that S3 does.

In S3, create a bucket that blocks public access (Permissions → Block public access is the easy way to get this right in AWS).
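With the AWS CLI, that might look like this (the bucket name and region are placeholders):

aws s3api create-bucket --bucket UPLOADS-BACKUP-BUCKET --region us-east-1
aws s3api put-public-access-block --bucket UPLOADS-BACKUP-BUCKET \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true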

Install minio-client (mc) somehow. Here’s one way.

curl https://dl.min.io/client/mc/release/linux-amd64/mc > /usr/local/bin/mc && chmod +x /usr/local/bin/mc

Configure minio-client with an alias called backup using a command something like this:

# mc alias set backup https://s3.amazonaws.com ACCESSKEY SECRETKEY --api S3v4
# mc mirror /var/discourse/shared/app/uploads backup/UPLOADS-BACKUP-BUCKET

Then create a service /etc/systemd/system/backup-uploads.service like this:

[Unit]
Description=Neartime remote backup sync of discourse uploads
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=600
User=root
ExecStart=/usr/local/bin/mc mirror --overwrite -a --watch /var/discourse/shared/app/uploads backup/UPLOADS-BACKUP-BUCKET

[Install]
WantedBy=multi-user.target

Note that UPLOADS-BACKUP-BUCKET here should be a different bucket from the s3_backup_bucket into which you configure Discourse to upload database backups. Also note that the path will be /var/discourse/shared/web_only/uploads if you use the standard multi-container naming, or /var/discourse/shared/standalone/uploads for a single-container deployment.

# systemctl enable backup-uploads
# systemctl start backup-uploads
# journalctl -fu backup-uploads

Upload a test image and make sure you see lines for successfully backing up the original and optimized images. Control-C will exit follow mode in journalctl.

Recovery

I have never had to test this plan as of this writing. This summary might miss something.

  • Restore all backed up files generally
  • Start nginx (now your maintenance page will show)
  • Do a normal deployment of Discourse using the restored files in /var/discourse/containers
  • Install minio-client in /usr/local/bin/mc if you didn’t restore it from backups
  • If you did not back up /root/.mc, set up the backup alias: # mc alias set backup https://s3.amazonaws.com ACCESSKEY SECRETKEY --api S3v4
  • # mc mirror backup/UPLOADS-BACKUP-BUCKET /var/discourse/shared/app/uploads
  • Restore the most recent database backup; I recommend that you Restore a backup from the command line
  • Only after you have confirmed the site is operational, re-configure backing up uploads to S3 as documented above.

Streaming postgresql backups

In the future, I may create, test, and provide a configuration that uses continuous WAL archiving (via the archive_command setting in PostgreSQL) to stream near-instantaneous Postgres backups with minio-client, similar to the streaming uploads backups above.
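Untested, but the shape of it in postgresql.conf would be something like the following sketch, where WAL-BACKUP-BUCKET is a placeholder and important details such as base backups and WAL retention are ignored:

archive_mode = on
archive_command = '/usr/local/bin/mc cp %p backup/WAL-BACKUP-BUCKET/%f'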

Performance monitoring

There are at least two approaches to performance monitoring.

Prometheus container

Set up prometheus, putting prometheus logs in /var/discourse/shared/prometheus if you are running it on the same system. Prometheus files can grow large, and you do not want them to fill up the root file system; you also probably want to bring them along if you move to a newer host system (either upgrading to a larger VM or a VM with a newer operating system installation).

If you deploy prometheus on the discourse system (or anywhere else on the public internet), configure security in front of it. Installed that way, one option would be nginx configuration like this:

  location /prometheus/ {
    auth_basic "Prometheus";
    auth_basic_user_file /etc/nginx/prometheus-htpasswd;
    proxy_pass http://localhost:9090/;
  }
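The password file referenced above can be created with htpasswd, which comes from httpd-tools on CentOS derivatives (apache2-utils on Debian derivatives); the prometheus user name here is an arbitrary choice:

dnf install httpd-tools
htpasswd -c /etc/nginx/prometheus-htpasswd prometheus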

Sysstat

If Prometheus is too much, consider using sysstat instead.

  • dnf install sysstat (or apt install sysstat on debian and derivatives)
  • systemctl enable --now sysstat
  • systemctl enable --now sysstat-collect.timer
  • systemctl enable --now sysstat-summary.timer
  • In /etc/cron.d/sysstat change 5-55/10 to */2
  • In /etc/default/sysstat change false to true

After this, the sar command can tell you if you are running out of resources from time to time.
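For example, sar -u reports CPU utilization and sar -r reports memory use over the day’s collected samples:

sar -u
sar -r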

Other Resources

Here’s a complementary (and more compact) discussion of using Discourse internally as a primary form of internal communications.


Hi, thank you for this how-to.

What are your current CPU (cores) and RAM?
What are your current settings:

  db_shared_buffers: "xGB"
  db_work_mem: "xMB"
  UNICORN_WORKERS:

2 vCPUs (L5640 Xeon), 4GB RAM — but thanks to the massive import, we have more content per simultaneous user than a typical Discourse instance that has grown entirely organically. We rarely have more than 5 simultaneous users.

I have not currently set db_work_mem in data.yml, but it looks like it’s set to 10MB in /etc/postgresql/13/main/postgresql.conf. In my data.yml I have set db_shared_buffers: "768MB", but now I see that /etc/postgresql/13/main/postgresql.conf in my data container says shared_buffers = 512MB, which surprises me. Apparently I didn’t rebuild my data container after my last change. :roll_eyes: I made the configuration change before adding prometheus (which uses more memory), so before I change that I will probably move prometheus off that server instance.

In the app, I have set UNICORN_WORKERS: 4


@mcdanlj, thanks for all the good information. Do you have any advice on routine maintenance? For example, weekly/monthly restarts, up/down monitoring with automatic restarts, or any other regular manual or automated maintenance?


@jaffadog Watch the #release-notes tag (the bell icon at the upper right; I use “Watching First Post”) and/or add https://meta.discourse.org/tag/release-notes.rss to your RSS feed to be notified of releases. App releases are typically monthly, which covers monthly restarts; I don’t do scheduled restarts any more often than that. Also:

If you are using letsencrypt and mail_receiver, I recommend setting things up so that mail_receiver is restarted after a new certificate is obtained.

That’s all I can think of at the moment. Thanks for the question! I have folded the details about watching release notes and restarting into the body of the post, so I believe the original post now covers this completely.

I have extended the update instructions to pull updates in /var/discourse in order to pick up base image version changes. After looking at Update base image for polkit vulnerability, I realized that leaving this step implicit could be misleading; I had previously thought of it as part of the baseline documentation. The launcher script contains a specific reference to the base image version, so until you run git pull you will be building on top of the old base image rather than running what has been tested. (Look for image= near the top of the file.)


Worse than that: by default, inactive trust level 0 accounts are deleted after a certain period. In my case, that was not at all what I wanted! Check “clean up inactive users after days” and set it to zero or a very large number.


I looked into that, and my understanding was that it covered users who never received email in the first place because they did not respond to the “confirm your email address” flow. I don’t think “inactive” means “has not logged in to the site”, but I’d like to know if I’m wrong.

No, “inactive” here does not mean active=false.

It means users who meet all of the following:

  • Trust level 0
  • No posts
  • Not an admin or moderator
  • Last seen more than X days ago

And yes, that wording is certainly confusing, but it is somewhat explained in the setting (“trust level 0 with no posts”).


I had actually confused two different settings and had “clean up unused staged users after days” in mind. On Maker Forums I set “clean up inactive users after days” to zero long ago and forgot to mention it here; I apparently missed it when auditing changed site settings for anything of possible general interest. Thank you both, @Ed_S and @RGJ! I have also added a paragraph to the post about enabling lurking. :smiling_face:


Hi @mcdanlj, thanks for sharing these great insights. Regarding the two-container installation, I don’t see how it helps reduce downtime when performing upgrades. On my test installation, upgrades via the GUI /admin/upgrade#/upgrade/all interface take a few minutes, but the site remains usable throughout the process.

When you have to rebuild a two-container installation from the command line, you can bootstrap the new image while the old container is still running.


As usual, @pfaffman is quicker than I am, and knows his own business well. :smiley:

I never upgrade from the GUI. That way, every time I update, I also pick up the system security updates underneath the container that Discourse runs in. That’s not to say that using the GUI is bad, nor is this a recommendation to everyone else. GUI updates have slightly less downtime (restarting the web container briefly interrupts service, which is one of several reasons to consider pairing it with an external nginx), so it’s a trade-off, and I chose the less common path.

With a single-container installation, you will have longer downtime more often if you routinely apply security updates and the occasional database version update, since Discourse takes advantage of new PostgreSQL features. Without checking the actual data, my intuition says there’s reason to rebuild three or four times a year. If that amount of downtime is fine from your point of view, there isn’t much reason to take on the complexity of a two-container deployment.


That’s very kind, but this is about your opinions. :wink:

Me too, except for the dashboard. Some people have lots of extras in their containers (Ansible and I don’t remember exactly what else) and run upgrades using dashboard.literatecomputing.com, and destroying such a container could kill the rebuild and cause problems. So lately I’ve done several docker_manager upgrades, and they have been very smooth.

Not really. When there is a new base image, docker_manager forces you to pick it up (or at least, it tries to).

That’s about right. For what it’s worth, and I don’t recommend it, I know plenty of people who have done zero upgrades for years without problems.

Yes, the in-container update is certainly well implemented!

Yes, that’s what I was trying to say. :tada:


Hello, and thanks again to both of you for the insights. So, I’m still undecided about my production setup. In theory, I understand why a two-container installation reduces downtime. But so far the GUI update mechanism has given me almost no downtime at all. I just timed it: there was a docker-manager update, and Discourse was 22 commits behind. The whole process took less than five minutes, and the forum was fully functional throughout. Admittedly there was no PostgreSQL update this time, but if that had also needed updating, the two-container approach would hit its maximum downtime too, right? So if I understand correctly, the two-container approach only reduces downtime when an SSH login and an app container rebuild are required (such as when adding/removing plugins)? I don’t expect to change my production configuration often, so I don’t see much potential downtime reduction in my case. Or do certain kinds of feature/security updates also require logging in via SSH and rebuilding the container?

No problem.

Yes, every PostgreSQL update (probably once every year or two, plus security updates to PostgreSQL itself, which have been less frequent) involves complete downtime.

Before I changed my approach, GUI updates failed on me a few times, but I no longer remember the details. It was a long time ago.

GUI updates do not update the base image, so they don’t apply any of the security updates to the bundled software, such as the image processing tools.

I prefer fast app-container rebuilds as my “normal” mode for consuming security updates to the app container as they become available, so I’m not deferring a ten-minute downtime to rebuild everything. That’s why the git pull in the launcher directory is the first step of how I apply every update: if the base image has been updated, for example for the image processing programs, those security updates get applied without me even having to think about whether they are needed. :smiling_face:

But in the end, I personally find the two-container approach simpler. And I am absolutely not recommending it to everyone else. If, based on your knowledge and experience, it doesn’t seem simpler, don’t do it just because I have identified specific contexts in which my personal, opinionated guide is useful. :grin:


Understood! :wink: I also tend to have strong opinions about how to implement tech, but I appreciate your additional perspective.

Hmm, is this still the case?

In any case, in my experience with traditional forums installed on a LAMP/LEMP stack without containers, the typical vulnerabilities/attack vectors through which the actual website gets compromised are almost always in either the web app code or its web development framework. That makes updates to the Discourse code base feel more urgent to me, and since those appear to be handled by the GUI, maybe that’s why I lean that way.


By the way, on the subject of downtime, allow me to plug Clear Linux a little. I started testing it for its low-level optimizations in pure number crunching, trying to shave a few hours off a huge forum import process. It may indeed offer speedups in that case, but in general, holy mackerel, rebooting, especially as a KVM guest, is crazy fast. On the cheapest VPS tier, I can reboot and be logged back in over SSH in under five seconds. So when there are important host OS updates, I look forward to using it.

Yes. But because components inside the container get updated, you will need to rebuild from the command line a few times a year, and that gives you 5-15 minutes of downtime. I run upgrades almost exclusively from the command line (except for dashboards, where I might update the dashboard plugin several times a day), and I have many customers who need those command-line updates as well (presumably they run updates from the web interface, but often they don’t).


Does the Discourse dashboard specifically notify you about such mandatory updates, or should I be paying attention to Debian PSAs?