Deploying Discourse without Docker

Though it is more convenient and safer to deploy Discourse following the official install guide, I wanted to dive deeper into the container and see how it can be deployed on Linux without Docker. I'm sharing the steps for your information; adapt and use them at your own risk.

Dive into how Discourse runs in the container

I take a look at the output of ./launcher start-cmd webonly:

true run --shm-size=512m --link data:data -d --restart=always -e LANG=en_US.UTF-8 -e RAILS_ENV=production … --name webonly -t -v /var/discourse/shared/webonly:/shared … local_discourse/webonly /sbin/boot

Then, looking at /sbin/boot and /etc/service/unicorn/run, I get the core command that starts Discourse:

LD_PRELOAD=$RUBY_ALLOCATOR HOME=/home/discourse USER=discourse exec thpoff chpst -u discourse:www-data -U discourse:www-data bundle exec config/unicorn_launcher -E production -c config/unicorn.conf.rb
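Roughly, the pieces of that command are (my own annotations, not from the Discourse docs):

```shell
# LD_PRELOAD=$RUBY_ALLOCATOR   preloads an alternative malloc (jemalloc in the stock image)
# thpoff                       a discourse_docker helper that disables transparent huge pages
# chpst -u discourse:www-data  runit's chpst, dropping privileges to the discourse user
# bundle exec config/unicorn_launcher -E production -c config/unicorn.conf.rb
#                              the actual web server process this guide reproduces
true  # annotation only; nothing to execute
```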

prepare system

FYI, I use Ubuntu 24.04 and zsh.

Follow PG’s official installation guide to install postgres from the PostgreSQL Apt Repository. I installed version 18, which works quite well, though the official install uses 15 at the time of writing.

Install redis (8.2 at the time of writing; the official install uses 7.0) and nginx, and create a dedicated user discourse:

apt install nginx libnginx-mod-http-brotli-static redis zsh zsh-autosuggestions zsh-syntax-highlighting
systemctl enable --now postgresql redis nginx
useradd -m -s /bin/zsh discourse

Install ImageMagick 7 (I use IMEI) and check the version. Mine is:

magick --version
Version: ImageMagick 7.1.2-3 Q16-HDRI
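Discourse shells out to the magick binary at boot (lib/letter_avatar.rb), and a missing binary only surfaces later as an Errno::ENOENT in the server logs, so a quick PATH check (a sketch) saves debugging time:

```shell
# Verify ImageMagick 7's `magick` binary is reachable from the discourse user's PATH.
if command -v magick >/dev/null 2>&1; then
  magick --version | head -n 1
else
  echo "magick not found on PATH - install ImageMagick 7 before starting the server"
fi
```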

Then change user (su - discourse) and install pnpm and rvm.

curl -fsSL https://get.pnpm.io/install.sh | zsh -
curl -sSL https://get.rvm.io | bash

Then adapt and add the following config to your .zshrc:

/home/discourse/.zshrc

# pnpm
export PNPM_HOME="/home/discourse/.local/share/pnpm"
case ":$PATH:" in
  *":$PNPM_HOME:"*) ;;
  *) export PATH="$PNPM_HOME:$PATH" ;;
esac
# pnpm end
alias npm='pnpm'
alias npx='pnpx'

# Add RVM to PATH for scripting. Make sure this is the last PATH variable change.
export PATH="$PATH:$HOME/.rvm/bin"

export ALLOW_EMBER_CLI_PROXY_BYPASS=1

export RAILS_ENV=production

export UNICORN_SIDEKIQ_MAX_RSS=1000
export UNICORN_WORKERS=4
export UNICORN_SIDEKIQS=1

export PUMA_SIDEKIQ_MAX_RSS=1000
export PUMA_WORKERS=4
export PUMA_SIDEKIQS=1

#export RUBY_YJIT_ENABLE=1
#export RUBY_CONFIGURE_OPTS="--enable-yjit"
export DISCOURSE_HOSTNAME=example.com
export DISCOURSE_DEVELOPER_EMAILS=discourse-admin@example.com

export DISCOURSE_MAXMIND_ACCOUNT_ID=<id>
export DISCOURSE_MAXMIND_LICENSE_KEY=<key>

export DISCOURSE_ENABLE_CORS=true
export DISCOURSE_MAX_REQS_PER_IP_MODE=none
export DISCOURSE_MAX_REQS_PER_IP_PER_MINUTE=20000
export DISCOURSE_MAX_REQS_PER_IP_PER_10_SECONDS=5000
export DISCOURSE_MAX_ASSET_REQS_PER_IP_PER_10_SECONDS=20000
export DISCOURSE_MAX_REQS_RATE_LIMIT_ON_PRIVATE=false
export DISCOURSE_MAX_USER_API_REQS_PER_MINUTE=200
export DISCOURSE_MAX_USER_API_REQS_PER_DAY=28800
export DISCOURSE_MAX_ADMIN_API_REQS_PER_MINUTE=600
export DISCOURSE_MAX_DATA_EXPLORER_API_REQ_MODE=none

export DISCOURSE_MAX_REQS_PER_IP_EXCEPTIONS="127.0.0.1 ::1"
cd /var/www/discourse

Log out and log in again as discourse for the .zshrc changes to take effect.

Install node and ruby:

pnpm env use --global latest # installs node 24.9 at the time of writing; the official install uses 22
rvm get master
rvm install 3.4 # will install ruby 3.4.6 at time of writing. official install uses 3.3
rvm use 3.4 --default
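A quick sanity check that the freshly installed toolchain is the one actually on PATH (a sketch; exact versions will differ):

```shell
# Print each tool's version; "not found" usually means .zshrc hasn't been
# reloaded (log out and back in as discourse).
for tool in node pnpm ruby bundle; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%s %s\n' "$tool" "$("$tool" --version 2>/dev/null | head -n 1)"
  else
    printf '%s not found\n' "$tool"
  fi
done
```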

prepare db (and restore a backup)

sudo -u postgres createuser -s discourse                                                   
sudo -u postgres createdb discourse 

$ sudo -u postgres psql discourse
psql>
ALTER USER discourse WITH PASSWORD 'xxx';
CREATE EXTENSION hstore;
CREATE EXTENSION pg_trgm;
CREATE EXTENSION plpgsql;
CREATE EXTENSION unaccent;
CREATE EXTENSION vector;
# to restore database extracted from a backup:
$ gunzip < dump.sql.gz | psql discourse

To restore a backup, you also need to copy the public and plugins folders.
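To confirm the extensions were created (and survived a restore), list them from psql; hstore, pg_trgm, plpgsql, unaccent and vector should all appear:

```sql
-- run inside `psql discourse`
SELECT extname, extversion FROM pg_extension ORDER BY extname;
```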

install Discourse

I consulted discourse_docker/templates/web.template.yml at commit 20e33fbfd98d3b8d9c57f7a111beff8aa51a5b98 in the discourse/discourse_docker GitHub repo.

as user root:

cd /var/www/
git clone https://github.com/discourse/discourse
mkdir -p /var/www/discourse/public
chown -R discourse:discourse /var/www/discourse/     
chown -R discourse:www-data /var/www/discourse/public

configure config/discourse.conf:

config/discourse.conf
max_data_explorer_api_req_mode = 'none'
max_user_api_reqs_per_day = '28800'
hostname = 'example.com'
redis_host = '127.0.0.1'
db_password = '<password>'
db_socket = ''
max_reqs_per_ip_per_10_seconds = '5000'
max_asset_reqs_per_ip_per_10_seconds = '20000'
max_reqs_rate_limit_on_private = 'false'
developer_emails = 'discourse-admin@example.com'
max_user_api_reqs_per_minute = '200'
maxmind_license_key = '<key>'
maxmind_account_id = '<id>'
max_reqs_per_ip_per_minute = '20000'
db_host = '127.0.0.1'
enable_cors = 'true'
db_port = ''
max_reqs_per_ip_mode = 'none'
max_admin_api_reqs_per_minute = '600'

smtp_user_name = '<name>'
smtp_address = 'postal.example.com'
smtp_port = '25'
smtp_password = '<password>'
smtp_domain = 'postalsend.example.com'
notification_email = 'noreply@postalsend.example.com'
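Entries in config/discourse.conf are exposed to the app through GlobalSetting, so you can verify the file is actually being read. A sketch, wrapped in a function because it only makes sense on the deployed host (run as the discourse user):

```shell
# Print a few GlobalSetting values via rails runner; if these show defaults
# instead of your values, config/discourse.conf is not being picked up.
check_discourse_conf() {
  cd /var/www/discourse || return 1
  RAILS_ENV=production bundle exec rails runner \
    'puts GlobalSetting.hostname, GlobalSetting.smtp_address, GlobalSetting.notification_email'
}
# call check_discourse_conf on the server
```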

Run the bundle / pnpm install, database migration, and assets precompile steps. This is also how you upgrade Discourse and its plugins.

as user discourse:

cd /var/www/discourse
git stash
git pull
git checkout tests-passed 
cd plugins
for plugin in *
do
    echo $plugin; cd ${plugin}; git pull; cd ..
done
cd ../
sed -i '/gem "rails_multisite"/i gem "rails"' Gemfile
bundle install --jobs $(($(nproc) - 1))
pnpm i
bundle exec rake db:migrate
bundle exec rake themes:update
bundle exec rake assets:precompile
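The steps above can be collected into one small function (a sketch assuming the /var/www/discourse layout from this guide; the Gemfile sed hack is left out):

```shell
# Upgrade Discourse core and all plugins, then migrate and rebuild assets.
# Run as the discourse user.
upgrade_discourse() {
  set -e
  cd /var/www/discourse
  git stash
  git pull
  git checkout tests-passed
  for plugin in plugins/*/; do
    echo "$plugin"
    (cd "$plugin" && git pull)
  done
  bundle install --jobs $(($(nproc) - 1))
  pnpm i
  bundle exec rake db:migrate themes:update assets:precompile
}
```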

I don’t want to use unicorn; Heroku recommends the Puma web server instead of Unicorn. Here is my config/puma.rb, written after consulting config/unicorn.conf.rb:

config/puma.rb
# frozen_string_literal: true

require "fileutils"
#require 'puma/acme'

discourse_path = File.expand_path(File.expand_path(File.dirname(__FILE__)) + "/../")

enable_logstash_logger = ENV["ENABLE_LOGSTASH_LOGGER"] == "1"
puma_stderr_path = "#{discourse_path}/log/puma.stderr.log"
puma_stdout_path = "#{discourse_path}/log/puma.stdout.log"

# Load logstash logger if enabled
if enable_logstash_logger
  require_relative "../lib/discourse_logstash_logger"
  FileUtils.touch(puma_stderr_path) if !File.exist?(puma_stderr_path)
  # Note: You may need to adapt the logger initialization for Puma
  log_formatter =
    proc do |severity, time, progname, msg|
      event = {
        "@timestamp" => Time.now.utc,
        "message" => msg,
        "severity" => severity,
        "type" => "puma",
      }
      "#{event.to_json}\n"
    end
else
  stdout_redirect puma_stdout_path, puma_stderr_path, true
end

# Number of workers (processes)
workers ENV.fetch("PUMA_WORKERS", 6).to_i

# Set the directory
directory discourse_path

# Bind to the specified address and port
bind ENV.fetch(
       "PUMA_BIND",
       "tcp://#{ENV["PUMA_BIND_ALL"] ? "" : "127.0.0.1:"}#{ENV.fetch("PUMA_PORT", 3000)}",
     )
#bind 'tcp://0.0.0.0:80'
#plugin :acme
#acme_server_name 'example.com'
#acme_tos_agreed true
#bind 'acme://0.0.0.0:443'

# PID file location
FileUtils.mkdir_p("#{discourse_path}/tmp/pids")
pidfile ENV.fetch("PUMA_PID_PATH", "#{discourse_path}/tmp/pids/puma.pid")

# State file - used by pumactl
state_path "#{discourse_path}/tmp/pids/puma.state"

# Environment-specific configuration
if ENV["RAILS_ENV"] == "production"
  # Production timeout
  worker_timeout 30
else
  # Development timeout
  worker_timeout ENV.fetch("PUMA_TIMEOUT", 60).to_i
end

# Preload application
preload_app!

# Handle worker boot and shutdown
before_fork do
  Discourse.preload_rails!
  Discourse.before_fork

  # Supervisor check
  supervisor_pid = ENV["PUMA_SUPERVISOR_PID"].to_i
  if supervisor_pid > 0
    Thread.new do
      loop do
        unless File.exist?("/proc/#{supervisor_pid}")
          puts "Kill self supervisor is gone"
          Process.kill "TERM", Process.pid
        end
        sleep 2
      end
    end
  end

  # Sidekiq workers
  sidekiqs = ENV["PUMA_SIDEKIQS"].to_i
  if sidekiqs > 0
    puts "starting #{sidekiqs} supervised sidekiqs"

    require "demon/sidekiq"
    Demon::Sidekiq.after_fork { DiscourseEvent.trigger(:sidekiq_fork_started) }
    Demon::Sidekiq.start(sidekiqs)

    if Discourse.enable_sidekiq_logging?
      Signal.trap("USR1") do
        # Delay Sidekiq log reopening
        sleep 1
        Demon::Sidekiq.kill("USR2")
      end
    end
  end

  # Email sync demon
  if ENV["DISCOURSE_ENABLE_EMAIL_SYNC_DEMON"] == "true"
    puts "starting up EmailSync demon"
    Demon::EmailSync.start(1)
  end

  # Plugin demons
  DiscoursePluginRegistry.demon_processes.each do |demon_class|
    puts "starting #{demon_class.prefix} demon"
    demon_class.start(1)
  end

  # Demon monitoring thread
  Thread.new do
    loop do
      begin
        sleep 60

        if sidekiqs > 0
          Demon::Sidekiq.ensure_running
          Demon::Sidekiq.heartbeat_check
          Demon::Sidekiq.rss_memory_check
        end

        if ENV["DISCOURSE_ENABLE_EMAIL_SYNC_DEMON"] == "true"
          Demon::EmailSync.ensure_running
          Demon::EmailSync.check_email_sync_heartbeat
        end

        DiscoursePluginRegistry.demon_processes.each(&:ensure_running)
      rescue => e
        Rails.logger.warn(
          "Error in demon processes heartbeat check: #{e}\n#{e.backtrace.join("\n")}",
        )
      end
    end
  end

  # Close Redis connection
  Discourse.redis.close
end

on_worker_boot do
  DiscourseEvent.trigger(:web_fork_started)
  Discourse.after_fork
end

# Worker timeout handling
worker_timeout 30

# Low-level worker options
threads 8, 32

To run Discourse, run puma -C config/puma.rb

Using systemd, you can run it on boot and restart it on failure. Here is the service file:

/etc/systemd/system/discourse.service
[Unit]
Description=Discourse with Puma Server
After=network.target postgresql.service
Requires=postgresql.service

[Service]
Type=simple
User=discourse
Group=discourse
WorkingDirectory=/var/www/discourse
# requires running `rvm 3.4.6 --default` before this service is run
ExecStart=/usr/bin/zsh -lc 'source /home/discourse/.zshrc && /home/discourse/.rvm/gems/ruby-3.4.6/bin/puma -C config/puma.rb'
ExecReload=/usr/bin/zsh -lc 'source /home/discourse/.zshrc && /home/discourse/.rvm/gems/ruby-3.4.6/bin/pumactl restart'

# Restart configuration
Restart=always
RestartSec=5s

# Basic security measures
NoNewPrivileges=true
ProtectSystem=full
ProtectHome=read-only

[Install]
WantedBy=multi-user.target

Now the puma server listens on 127.0.0.1:3000. Adapt the nginx config file from Docker:

/etc/nginx/sites-enabled/discourse.conf
# Additional MIME types that you'd like nginx to handle go in here
types {
    text/csv csv;
    #application/wasm wasm;
}

upstream discourse { server 127.0.0.1:3000; }

# inactive means we keep stuff around for 1440m minutes regardless of last access (1 week)
# levels means it is a 2 deep hierarchy cause we can have lots of files
# max_size limits the size of the cache
proxy_cache_path /var/nginx/cache inactive=1440m levels=1:2 keys_zone=one:10m max_size=600m;

# Increased from the default value to accommodate large cookies during OAuth2 flows
# like in https://meta.discourse.org/t/x/74060 and large CSP and Link (preload) headers
proxy_buffer_size 32k;
proxy_buffers 4 32k;

# Increased from the default value to allow for a large volume of cookies in request headers
# Discourse itself tries to minimise cookie size, but we cannot control other cookies set by other tools on the same domain.
large_client_header_buffers 4 32k;

# attempt to preserve the proto, must be in http context
map $http_x_forwarded_proto $thescheme {
  default $scheme;
  "~https$" https;
}

log_format log_discourse '[$time_local] "$http_host" $remote_addr "$request" "$http_user_agent" "$sent_http_x_discourse_route" $status $bytes_sent "$http_referer" $upstream_response_time $request_time "$upstream_http_x_discourse_username" "$upstream_http_x_discourse_trackview" "$upstream_http_x_queue_time" "$upstream_http_x_redis_calls" "$upstream_http_x_redis_time" "$upstream_http_x_sql_calls" "$upstream_http_x_sql_time"';

# Allow bypass cache from localhost
#geo $bypass_cache {
#  default         0;
#  127.0.0.1       1;
#  ::1             1;
#}

limit_req_zone $binary_remote_addr zone=flood:10m rate=12r/s;
limit_req_zone $binary_remote_addr zone=bot:10m rate=200r/m;
limit_req_status 429;
limit_conn_zone $binary_remote_addr zone=connperip:10m;
limit_conn_status 429;
server {
  access_log /var/log/nginx/access.log log_discourse;
  
  #listen unix:/var/nginx/nginx.http.sock;
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name example.com;
  ssl_certificate /etc/nginx/ssl/example.com.cer;
  ssl_certificate_key /etc/nginx/ssl/example.com.key;
  ssl_protocols       TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
  ssl_ciphers         HIGH:!aNULL:!MD5;

  set_real_ip_from unix:;
  set_real_ip_from  127.0.0.1/32;
  set_real_ip_from  ::1/128;
  real_ip_header    X-Forwarded-For;
  real_ip_recursive on;

  gzip on;
  gzip_vary on;
  gzip_min_length 1000;
  gzip_comp_level 5;
  gzip_types application/json text/css text/javascript application/x-javascript application/javascript image/svg+xml application/wasm;
  gzip_proxied any;

  # Uncomment and configure this section for HTTPS support
  # NOTE: Put your ssl cert in your main nginx config directory (/etc/nginx)
  #
  # rewrite ^/(.*) https://enter.your.web.hostname.here/$1 permanent;
  #
  # listen 443 ssl;
  # ssl_certificate your-hostname-cert.pem;
  # ssl_certificate_key your-hostname-cert.key;
  # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  # ssl_ciphers HIGH:!aNULL:!MD5;
  #

  server_tokens off;

  sendfile on;


  keepalive_timeout 65;

  # maximum file upload size (keep up to date when changing the corresponding site setting)
  client_max_body_size 128m ;


  # path to discourse's public directory
  set $public /var/www/discourse/public;

  # without weak etags we get zero benefit from etags on dynamically compressed content
  # further more etags are based on the file in nginx not sha of data
  # use dates, it solves the problem fine even cross server
  etag off;

  # prevent direct download of backups
  location ^~ /backups/ {
    internal;
  }

  # bypass rails stack with a cheap 204 for favicon.ico requests
  location /favicon.ico {
    return 204;
    access_log off;
    log_not_found off;
  }

  location / {
    root $public;
    add_header ETag "";

    # auth_basic on;
    # auth_basic_user_file /etc/nginx/htpasswd;

    location ~ ^/uploads/short-url/ {
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Request-Start "t=${msec}";
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $thescheme;
      proxy_pass http://discourse;
      break;
    }

    location ~ ^/(secure-media-uploads/|secure-uploads)/ {
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Request-Start "t=${msec}";
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $thescheme;
      proxy_pass http://discourse;
      break;
    }

    location ~* (fonts|assets|plugins|uploads)/.*\.(eot|ttf|woff|woff2|ico|otf)$ {
      expires 1y;
      add_header Cache-Control public,immutable;
      add_header Access-Control-Allow-Origin *;
    }

    location = /srv/status {
      access_log off;
      log_not_found off;
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Request-Start "t=${msec}";
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $thescheme;
      proxy_pass http://discourse;
      break;
    }

    # some minimal caching here so we don't keep asking
    # longer term we should increase probably to 1y
    location ~ ^/javascripts/ {
      expires 1d;
      add_header Cache-Control public,immutable;
      add_header Access-Control-Allow-Origin *;
    }

    location ~ ^/assets/(?<asset_path>.+)$ {
      expires 1y;
      # asset pipeline enables this
      brotli_static on;
      gzip_static on;
      add_header Cache-Control public,immutable;
      # HOOK in asset location (used for extensibility)
      # TODO I don't think this break is needed, it just breaks out of rewrite
      break;
    }

    location ~ ^/plugins/ {
      expires 1y;
      add_header Cache-Control public,immutable;
      add_header Access-Control-Allow-Origin *;
    }

    # cache emojis
    location ~ /images/emoji/ {
      expires 1y;
      add_header Cache-Control public,immutable;
      add_header Access-Control-Allow-Origin *;
    }

    location ~ ^/uploads/ {

      # NOTE: it is really annoying that we can't just define headers
      # at the top level and inherit.
      #
      # proxy_set_header DOES NOT inherit, by design, we must repeat it,
      # otherwise headers are not set correctly
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Request-Start "t=${msec}";
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $thescheme;
      proxy_set_header X-Sendfile-Type X-Accel-Redirect;
      proxy_set_header X-Accel-Mapping $public/=/downloads/;
      expires 1y;
      add_header Cache-Control public,immutable;

      ## optional upload anti-hotlinking rules
      #valid_referers none blocked mysite.com *.mysite.com;
      #if ($invalid_referer) { return 403; }

      # custom CSS
      location ~ /stylesheet-cache/ {
          add_header Access-Control-Allow-Origin *;
          try_files $uri =404;
      }
      # this allows us to bypass rails
      location ~* \.(gif|png|jpg|jpeg|bmp|tif|tiff|ico|webp|avif)$ {
          add_header Access-Control-Allow-Origin *;
          try_files $uri =404;
      }
      # SVG needs an extra header attached
      location ~* \.(svg)$ {
      }
      # thumbnails & optimized images
      location ~ /_?optimized/ {
          add_header Access-Control-Allow-Origin *;
          try_files $uri =404;
      }

      proxy_pass http://discourse;
      break;
    }

    location ~ ^/admin/backups/ {
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Request-Start "t=${msec}";
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $thescheme;
      proxy_set_header X-Sendfile-Type X-Accel-Redirect;
      proxy_set_header X-Accel-Mapping $public/=/downloads/;
      proxy_pass http://discourse;
      break;
    }

    # This big block is needed so we can selectively enable
    # acceleration for backups, avatars, sprites and so on.
    # see note about repetition above
    location ~ ^/(svg-sprite/|letter_avatar/|letter_avatar_proxy/|user_avatar|highlight-js|stylesheets|theme-javascripts|favicon/proxied|service-worker) {
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Request-Start "t=${msec}";
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $thescheme;

      # if Set-Cookie is in the response nothing gets cached
      # this is double bad cause we are not passing last modified in
      proxy_ignore_headers "Set-Cookie";
      proxy_hide_header "Set-Cookie";
      proxy_hide_header "X-Discourse-Username";
      proxy_hide_header "X-Runtime";

      # note x-accel-redirect can not be used with proxy_cache
      proxy_cache one;
      proxy_cache_key "$scheme,$host,$request_uri";
      proxy_cache_valid 200 301 302 7d;
      #proxy_cache_bypass $bypass_cache;
      proxy_pass http://discourse;
      break;
    }

    # we need buffering off for message bus
    location /message-bus/ {
      proxy_set_header X-Request-Start "t=${msec}";
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $thescheme;
      proxy_http_version 1.1;
      proxy_buffering off;
      proxy_pass http://discourse;
      break;
    }

    # this means every file in public is tried first
    try_files $uri @discourse;
  }

  location /downloads/ {
    internal;
    alias $public/;
  }

  location @discourse {
  limit_conn connperip 20;
  limit_req zone=flood burst=12 nodelay;
  limit_req zone=bot burst=100 nodelay;
    proxy_set_header Host $http_host;
    proxy_set_header X-Request-Start "t=${msec}";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $thescheme;
    proxy_pass http://discourse;
  }

}

Now your Discourse can be accessed at https://example.com.
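As a quick smoke test (substitute your hostname), /srv/status is the cheap health-check route proxied in the nginx config above; a healthy instance answers it with ok:

```shell
# The || branch keeps this safe to run before the site is actually up.
curl -fsS --max-time 5 https://example.com/srv/status || echo "site not reachable yet"
```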

maintenance

To access the Rails console, simply run rails c as user discourse in /var/www/discourse. The discourse command mentioned in the official docs is basically bundle exec script/discourse.

To upgrade Discourse, consult #upgrade-cmd, then restart puma with pumactl restart. A phased restart avoids dropped connections, but it is not available when preload_app! is enabled, as in the config above; for the details, consult puma/docs/restart.md in the puma GitHub repo.
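For reference, a hot restart through pumactl looks like this (a sketch; the state file path comes from config/puma.rb above, and the guard makes it a no-op where pumactl is absent):

```shell
if command -v pumactl >/dev/null 2>&1; then
  pumactl -S /var/www/discourse/tmp/pids/puma.state restart || echo "restart failed - is puma running?"
else
  echo "pumactl not on PATH"
fi
```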


Should this be moved to Community wiki > Sysadmins?

Hi. Thanks for sharing.

I'm writing an install script for an lxc container running Debian 12. It's nearly finished and works well. I'll publish it once it's ready.

I've managed to get the first admin-registration page to display. However, the confirmation email is sent to discourse@myhostname, and the smtp_server is myhostname, which is absurd. The variables in .bashrc (or .zshrc) or discourse.conf aren't being used for sending email. The developer email address is correct, but all the other parameters are wrong and I can't change them. Any idea how to make this work?


According to the code,

SMTP should be configured in config/discourse.conf.
For me, it has these lines:

smtp_user_name = '...com'
smtp_address = '...com'
smtp_port = '587'
smtp_password = '...'
smtp_domain = '...com'
notification_email = 'noreply@....com'

Have you checked the logs? For this custom install, the logs are under the log directory: production.log and puma.stdout.log.

Thank you very much for your reply.

I've set up config/discourse.conf as you wrote, except that I use zsh.
There's no information about smtp in my logs; the only log output (production.log) is:

Started GET "/" for 192.168.1.14 at 2025-09-15 17:12:51 +0000
Processing by FinishInstallationController#index as HTML
Rendered layout layouts/finish_installation.html.erb (Duration: 57.8ms | GC: 1.0ms)
Completed 200 OK in 164ms (Views: 61.7ms | ActiveRecord: 0.0ms (0 queries, 0 cached) | GC: 8.7ms)
Started GET "/finish-installation/register" for 192.168.1.14 at 2025-09-15 17:12:53 +0000
Processing by FinishInstallationController#register as HTML
Rendered layout layouts/finish_installation.html.erb (Duration: 63.8ms | GC: 1.5ms)
Completed 200 OK in 166ms (Views: 68.0ms | ActiveRecord: 0.0ms (0 queries, 0 cached) | GC: 5.4ms)
Started POST "/finish-installation/register" for 192.168.1.14 at 2025-09-15 17:12:54 +0000
Processing by FinishInstallationController#register as HTML
Parameters: {"authenticity_token"=>"U9_0mqt8iE5Y_jdNSV5uZxOgz9rspJbEsohs0jU8QTOPaOXdyaG-oLSFYtn9dQ2-mdHYvCzjFsRaqzp6YlNzbQ", "email"=>"webmaster@domain.app", "username"=>"ioio", "password"=>"[FILTERED]", "commit"=>"Register"}
Redirected to http://myhostname/finish-installation/confirm-email
Completed 302 Found in 140ms (ActiveRecord: 0.0ms (0 queries, 0 cached) | GC: 4.4ms)
Started GET "/finish-installation/confirm-email" for 192.168.1.14 at 2025-09-15 17:12:54 +0000
Processing by FinishInstallationController#confirm_email as HTML
Rendered layout layouts/finish_installation.html.erb (Duration: 62.1ms | GC: 1.5ms)
My mail server's logs show no entries from the discourse server (it works fine for all my other servers, which are all lxc containers). I can successfully send mail with the mail command.

Using puma -C config/puma.rb:

Use 'before_worker_boot', 'on_worker_boot' is deprecated and will be removed in v8
[498] Puma starting in cluster mode...
[498] * Puma version: 7.0.0 ("Romantic Warrior")
[498] * Ruby version: ruby 3.3.9 (2025-07-24 revision f5c772fc7c) [x86_64-linux]
[498] * Min threads: 8
[498] * Max threads: 32
[498] * Environment: production
[498] * Master PID: 498
[498] * Workers: 8
[498] * Restarts: (✔) hot (✖) phased (✖) refork
[498] * Preloading application
[498] * Listening on http://127.0.0.1:3000
[498] ! WARNING: Detected 2 Thread(s) started in app boot:
[498] ! #<Thread:0x00007f43cec88b38 /home/discourse/.rvm/gems/ruby-3.3.9/gems/message_bus-4.4.1/lib/message_bus.rb:738 sleep> - /home/discourse/.rvm/gems/ruby-3.3.9/gems/redis-client-0.25.2/lib/redis_client/ruby_connection/buffered_io.rb:213:in `wait_readable'
[498] ! #<Thread:0x00007f43cec887f0 /home/discourse/.rvm/gems/ruby-3.3.9/gems/message_bus-4.4.1/lib/message_bus/timer_thread.rb:38 sleep> - /home/discourse/.rvm/gems/ruby-3.3.9/gems/message_bus-4.4.1/lib/message_bus/timer_thread.rb:130:in `sleep'
[498] Use Ctrl-C to stop

My discourse.conf:

max_data_explorer_api_req_mode = 'none'
max_user_api_reqs_per_day = '28800'
hostname = 'xxxxxxxxxxxxxxxxx.xxxx.app'
redis_host = 'localhost'
smtp_user_name = 'xxxxx@xxxxx.app'
db_password = 'password'
smtp_address = 'mail.xxxxx.app'
db_socket = ''
max_reqs_per_ip_per_10_seconds = '5000'
max_asset_reqs_per_ip_per_10_seconds = '20000'
max_reqs_rate_limit_on_private = 'false'
developer_emails = 'webmaster@xxx.app'
max_user_api_reqs_per_minute = '200'
maxmind_license_key = ''
smtp_port = '465'
maxmind_account_id = '50'
smtp_password = 'xxxxxx'
max_reqs_per_ip_per_minute = '20000'
notification_email = 'no-reply-discourse@xxx.app'
db_host = 'localhost'
enable_cors = 'true'
db_port = ''
max_reqs_per_ip_mode = 'none'
smtp_domain = 'xxx.app'
max_admin_api_reqs_per_minute = '600'

My .bashrc is like your .zshrc.
Changing the developer_emails or db_password entries works (the webmaster registration page shows the correct email), but the other smtp parameters are ignored.

Your config/puma.rb file contains some errors (around the line with port 3000). Could you post it again?

I registered the admin in rails c, and afterwards got the "Oops..." page on the start page. No page renders correctly.

Please help.

@lion, try this:

# frozen_string_literal: true

require "fileutils"

discourse_path = File.expand_path(File.expand_path(File.dirname(__FILE__)) + "/../")

enable_logstash_logger = ENV["ENABLE_LOGSTASH_LOGGER"] == "1"
puma_stderr_path = "#{discourse_path}/log/puma.stderr.log"
puma_stdout_path = "#{discourse_path}/log/puma.stdout.log"

# Load the logstash logger (if enabled)
if enable_logstash_logger
  require_relative "../lib/discourse_logstash_logger"
  FileUtils.touch(puma_stderr_path) if !File.exist?(puma_stderr_path)
  # Note: you may need to adapt the logger initialization for Puma
  log_formatter = proc do |severity, time, progname, msg|
    event = {
      "@timestamp" => Time.now.utc,
      "message" => msg,
      "severity" => severity,
      "type" => "puma"
    }
    "#{event.to_json}\n"
  end
else
  stdout_redirect puma_stdout_path, puma_stderr_path, true
end

# Number of workers (processes)
workers ENV.fetch("PUMA_WORKERS", 8).to_i

# Set the directory
directory discourse_path

# Bind to the specified address and port
bind ENV.fetch("PUMA_BIND", "tcp://#{ENV['PUMA_BIND_ALL'] ? '' : '127.0.0.1:'}3000")

# PID file location
FileUtils.mkdir_p("#{discourse_path}/tmp/pids")
pidfile ENV.fetch("PUMA_PID_PATH", "#{discourse_path}/tmp/pids/puma.pid")

# State file - used by pumactl
state_path "#{discourse_path}/tmp/pids/puma.state"

# Environment-specific configuration
if ENV["RAILS_ENV"] == "production"
  # Production timeout
  worker_timeout 30
else
  # Development timeout
  worker_timeout ENV.fetch("PUMA_TIMEOUT", 60).to_i
end

# Preload application
preload_app!

# Handle worker boot and shutdown
before_fork do
  Discourse.preload_rails!
  Discourse.before_fork

  # Supervisor check
  supervisor_pid = ENV["PUMA_SUPERVISOR_PID"].to_i
  if supervisor_pid > 0
    Thread.new do
      loop do
        unless File.exist?("/proc/#{supervisor_pid}")
          puts "Kill self supervisor is gone"
          Process.kill "TERM", Process.pid
        end
        sleep 2
      end
    end
  end

  # Sidekiq workers
  sidekiqs = ENV["PUMA_SIDEKIQS"].to_i
  if sidekiqs > 0
    puts "starting #{sidekiqs} supervised sidekiqs"

    require "demon/sidekiq"
    Demon::Sidekiq.after_fork { DiscourseEvent.trigger(:sidekiq_fork_started) }
    Demon::Sidekiq.start(sidekiqs)

    if Discourse.enable_sidekiq_logging?
      Signal.trap("USR1") do
        # Delay Sidekiq log reopening
        sleep 1
        Demon::Sidekiq.kill("USR2")
      end
    end
  end

  # Email sync demon
  if ENV["DISCOURSE_ENABLE_EMAIL_SYNC_DEMON"] == "true"
    puts "starting up EmailSync demon"
    Demon::EmailSync.start(1)
  end

  # Plugin demons
  DiscoursePluginRegistry.demon_processes.each do |demon_class|
    puts "starting #{demon_class.prefix} demon"
    demon_class.start(1)
  end

  # Demon monitoring thread
  Thread.new do
    loop do
      begin
        sleep 60

        if sidekiqs > 0
          Demon::Sidekiq.ensure_running
          Demon::Sidekiq.heartbeat_check
          Demon::Sidekiq.rss_memory_check
        end

        if ENV["DISCOURSE_ENABLE_EMAIL_SYNC_DEMON"] == "true"
          Demon::EmailSync.ensure_running
          Demon::EmailSync.check_email_sync_heartbeat
        end

        DiscoursePluginRegistry.demon_processes.each(&:ensure_running)
      rescue => e
        Rails.logger.warn("Error in demon processes heartbeat check: #{e}\n#{e.backtrace.join("\n")}")
      end
    end
  end

  # Close Redis connection
  Discourse.redis.close
end

on_worker_boot do
  DiscourseEvent.trigger(:web_fork_started)
  Discourse.after_fork
end

# Worker timeout handling
worker_timeout 30

# Low-level worker options
threads 8, 32

Thanks. I forgot to mention it sits behind haproxy, which serves multiple servers.

  1. I get "mixed content" errors in the browser console. What should I change in the nginx file?
  2. I get logs about magick when starting the puma command, even though it was installed from the beginning.

==> /var/www/discourse/log/puma.stderr.log <==
=== puma startup: 2025-09-19 01:40:45 +0200 ===
unknown OID 16720: failed to recognize type of 'embeddings'. It will be treated as String.
#<Thread:0x00007fc59a2f5a78 /var/www/discourse/lib/discourse.rb:1190 run> terminated with exception (report_on_exception is true):
/var/www/discourse/lib/letter_avatar.rb:112:in ``': No such file or directory - magick (Errno::ENOENT)
	from /var/www/discourse/lib/letter_avatar.rb:112:in `image_magick_version'
	from /var/www/discourse/lib/discourse.rb:1190:in `block in preload_rails!'
I've read that this may be why the pages don't display.

Are you forcing SSL in the Discourse settings?

Could you run magick --version as the discourse user? My output is:
Version: ImageMagick 7.1.1-45 Q16-HDRI x86_64 3cbce5696:20250308 https://imagemagick.org
Copyright: (C) 1999 ImageMagick Studio LLC
License: ImageMagick | License
Features: Cipher DPC HDRI Modules OpenMP(4.5)
Delegates (built-in): bzlib cairo djvu fftw fontconfig freetype gslib gvc heic jbig jng jp2 jpeg jxl lcms lqr ltdl lzma openexr pangocairo png ps raqm raw rsvg tiff webp wmf x xml zip zlib zstd
Compiler: gcc (13.3)

Yes, that's exactly what I thought.

Installing "manually", I found a lot of unmet dependencies. I'm writing an Ansible playbook to deploy discourse across several environments.

My system is almalinux/redhat, but I'll share what I've found when I think it's more or less done.


In haproxy.cfg, port 443 handles the encryption with a crt and redirects to the container on 8080:

backend BACKEND_XXX
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
server xxx 192.168.1.48:8080 weight 1 # add "check ssl verify none" if ssl is enabled in the discourse nginx

  1. In nginx/…/discourse.conf I copied this topic's version and changed only the port, servername and certificate file locations.
    => PB1: mixed content, and the same magick problem in the logs
    => PB2: on the "/confirm-email" page the browser shows a "site not found" error.

  2. Then I modified the file to remove ssl on the port and commented out every line containing "$thescheme", so that encryption happens only in haproxy.
    => 502 error on the first page, same magick log.

  3. Then I installed magick from source (7.1.2.3) and added the binary folder to PATH in the .bashrc of both root and the discourse user. Installing with apt install didn't work (no effect).
    => The 502 error is gone, but content is still mixed and the browser says "site not found".
    => The logs still say magick can't be found, but differently.

==> /var/www/discourse/log/puma.stderr.log <==
=== puma startup: 2025-09-19 12:06:05 +0200 ===
unknown OID 16720: failed to recognize type of 'embeddings'. It will be treated as String.
#<Thread:0x00007f611a21c0f0 /var/www/discourse/lib/discourse.rb:1190 run> terminated with exception (report_on_exception is true):
/var/www/discourse/lib/letter_avatar.rb:112:in ``': No such file or directory - magick (Errno::ENOENT)
	from /var/www/discourse/lib/letter_avatar.rb:112:in `image_magick_version'
	from /var/www/discourse/lib/discourse.rb:1190:in `block in preload_rails!'
==> /var/www/discourse/log/puma.stdout.log <==
[8967] WARNING hook before_fork failed with exception (Errno::ENOENT) No such file or directory - magick

In conclusion:
How do I get puma to find magick?
How do I disable ssl in nginx, since haproxy handles it?

I got it working as described in the first post, but without ssl on nginx, by copying nginx.config.sample and changing only the hostname and port.
So with or without ssl on nginx, I can see the first admin-registration pages up to the resend-email page, but in both cases:

  1. "mixed content" is shown (e.g. for image files);
  2. pages other than admin registration show "Oops...";
  3. no confirmation email is sent;
  4. ImageMagick is still not found.

You mentioned containers in one of your earlier posts.

Are you using containers for any part of this?

Today I'd like to finish my deployment in a pre-production environment; maybe then I'll have more information to share.

Same as you :unamused_face:

  • The confirmation email is not sent (I'm using mailtrap, and I'm sure it works).
  • Created the admin user with rake
  • After that, errors on /...
  • Also hit the imagemagick problem, with imagemagick installed (I'm pretty sure some rubygem is missing...)

I'm deploying with docker now so I can compare. I hate docker :expressionless_face: but I'm going to waste a lot of precious time...

I barely know Docker (if at all!) and I don't need to know it to run a Discourse forum in production. Just follow the supported instructions and it will do what you need.

Yes, I believe you. But I need to know what I'm deploying and how it works.

It's the best way to find and fix problems 😝

Because you will run into problems, always; maybe not now, but I'm sure your future self will.

Anyway, since I can't get support for an alternative install method, I'll follow the official instructions and try to help others 😜

I've also spent a lot of time on these problems.
About Docker, look at this Docker issue (bug or backdoor?...), fixed by now: https://youtu.be/dTqxNc1MVLE
It's one more reason why I don't use Docker.
I'm using lxd (lxc) containers and I'm happy with them. I'll start by installing Docker inside an lxc container, then export the database once an install on lxc without Docker becomes possible.

I'm not forcing SSL in Discourse because I want my haproxy to handle it: haproxy's passthrough mode can't redirect GET requests over plain HTTP, and since I need to serve several websites I need HTTP mode in haproxy, which requires terminating SSL on the haproxy side. I want to avoid a double SSL gateway.
So haproxy (on one LXC) listens on 443 and forwards to port 8080 (no SSL) on my discourse container (LXC).
Strangely, the nginx-config-sample shipped in the discourse folder is configured without SSL on port 80, so it should work, but I ran into the problems described above.

You need that so Discourse generates only https links. It doesn't change the redirects themselves; it changes what they require.

You need to turn on the force https setting.

HTTPS: yes, but can you tell me how to enable it?

Great news! I've solved the problem!!! The root cause was the magick program. After properly installing magick 7 from source (don't install imagemagick with apt, that's version 6), my discourse site displays all pages correctly! Email, admin registration, and everything else work! Once I've fixed the "mixed content" issue, I'll publish my script for installing on an lxd-lxc container (Debian 12) behind haproxy (on another lxd-lxc container).
