Though it is more convenient and safer to deploy Discourse following the official install guide, I wanted to dive deeper into the container and see how Discourse can be deployed on Linux without Docker. I am sharing the steps for your information; adapt and use them at your own risk.
Dive into how Discourse is run in container
I took a look at the output of ./launcher start-cmd webonly:
true run --shm-size=512m --link data:data -d --restart=always -e LANG=en_US.UTF-8 -e RAILS_ENV=production … --name webonly -t -v /var/discourse/shared/webonly:/shared … local_discourse/webonly /sbin/boot
then looked at /sbin/boot and /etc/service/unicorn/run, and extracted the core command that starts Discourse:
LD_PRELOAD=$RUBY_ALLOCATOR HOME=/home/discourse USER=discourse exec thpoff chpst -u discourse:www-data -U discourse:www-data bundle exec config/unicorn_launcher -E production -c config/unicorn.conf.rb
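In plain terms: thpoff (a discourse_docker helper) runs the command with transparent huge pages off, and chpst (from runit) drops privileges to discourse:www-data. Outside the container, a rough equivalent would be something like this sketch (paths assumed from this guide, not a verbatim command):

```shell
# Hypothetical plain-Linux equivalent of the container's boot command.
# Assumes Discourse is checked out in /var/www/discourse and the
# discourse user's gems are on its PATH.
cd /var/www/discourse
sudo -u discourse -g www-data \
  env HOME=/home/discourse RAILS_ENV=production \
  bundle exec config/unicorn_launcher -E production -c config/unicorn.conf.rb
```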
prepare system
FYI, I use Ubuntu 24.04 and zsh.
Follow PostgreSQL’s official installation guide to install Postgres from the PostgreSQL Apt Repository. I installed version 17, which works quite well, though the official Docker image uses 15 as of today.
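For reference, the Apt Repository setup boils down to the following (mirrors the official download page as of writing; verify against the guide before running):

```shell
# Add the PostgreSQL Apt Repository using the automated setup script,
# then install version 17.
sudo apt install -y postgresql-common
sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
sudo apt install -y postgresql-17
```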
Install redis and nginx, and create a dedicated user discourse:
apt install nginx libnginx-mod-http-brotli-static redis zsh zsh-autosuggestions zsh-syntax-highlighting
systemctl enable --now postgresql redis nginx
useradd -m -s /bin/zsh discourse
Then change user (su - discourse) and install pnpm and rvm:
curl -fsSL https://get.pnpm.io/install.sh | zsh -
curl -sSL https://get.rvm.io | bash
Then adapt and add the following config to your .zshrc:
/home/discourse/.zshrc
export RAILS_ENV=production
export UNICORN_SIDEKIQ_MAX_RSS=800
export UNICORN_WORKERS=8
export UNICORN_SIDEKIQS=1
export PUMA_SIDEKIQ_MAX_RSS=800
export PUMA_WORKERS=8
export PUMA_SIDEKIQS=1
export RUBY_GC_HEAP_GROWTH_MAX_SLOTS=40000
export RUBY_GC_HEAP_INIT_SLOTS=400000
export RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.5
export DISCOURSE_HOSTNAME=example.com
export DISCOURSE_DEVELOPER_EMAILS=example@example.com
export DISCOURSE_SMTP_ADDRESS=example.com
export DISCOURSE_SMTP_PORT=587/yours
export DISCOURSE_SMTP_USER_NAME=example
export DISCOURSE_SMTP_PASSWORD=example
export DISCOURSE_SMTP_DOMAIN=example.com
export DISCOURSE_NOTIFICATION_EMAIL=example.com
export DISCOURSE_MAXMIND_ACCOUNT_ID=867/yours
export DISCOURSE_MAXMIND_LICENSE_KEY=F77V/yours
export DISCOURSE_ENABLE_CORS=true
export DISCOURSE_MAX_REQS_PER_IP_MODE=none
export DISCOURSE_MAX_REQS_PER_IP_PER_MINUTE=20000
export DISCOURSE_MAX_REQS_PER_IP_PER_10_SECONDS=5000
export DISCOURSE_MAX_ASSET_REQS_PER_IP_PER_10_SECONDS=20000
export DISCOURSE_MAX_REQS_RATE_LIMIT_ON_PRIVATE=false
export DISCOURSE_MAX_USER_API_REQS_PER_MINUTE=200
export DISCOURSE_MAX_USER_API_REQS_PER_DAY=28800
export DISCOURSE_MAX_ADMIN_API_REQS_PER_MINUTE=600
export DISCOURSE_MAX_DATA_EXPLORER_API_REQ_MODE=none
Log out and log in again as discourse for .zshrc to take effect.
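Once back in, a quick sanity check that the variables are exported (my own addition, not required):

```shell
# List the Discourse-related variables that the new shell picked up
env | grep -E '^(DISCOURSE|PUMA|UNICORN|RUBY_GC)' | sort
```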
Install node and ruby:
pnpm env use --global lts # will install node 22 as of today
rvm install 3.3 # will install ruby 3.3.7 as of today
rvm use 3.3 --default
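A quick check that the toolchain landed (versions are what I saw as of today; yours may differ):

```shell
node -v   # expect v22.x
ruby -v   # expect ruby 3.3.7
pnpm -v
```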
prepare db (and restore a backup)
sudo -u postgres createuser -s discourse
sudo -u postgres createdb discourse
sudo -u postgres psql discourse
psql>
ALTER USER discourse WITH PASSWORD 'xxx';
CREATE EXTENSION hstore;
CREATE EXTENSION pg_trgm;
CREATE EXTENSION plpgsql;
CREATE EXTENSION unaccent;
CREATE EXTENSION vector;
# to restore database extracted from a backup:
gunzip < dump.sql.gz | psql discourse
When restoring a backup, you need to copy the public and plugins folders as well.
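A sketch of the restore flow, assuming a standard backup archive downloaded from /admin/backups (the filename here is hypothetical):

```shell
# Extract the backup archive; it contains dump.sql.gz and uploads/
cd /tmp
tar xzf discourse-2025-01-01.tar.gz
# Load the database dump into the discourse database
gunzip < dump.sql.gz | psql discourse
# Copy uploads into the site's public folder
rsync -a uploads/ /var/www/discourse/public/uploads/
```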
install Discourse
As user root:
cd /var/www/
git clone https://github.com/discourse/discourse
mkdir -p /var/www/discourse/public
chown -R discourse:discourse /var/www/discourse/
chown -R discourse:www-data /var/www/discourse/public
Configure config/discourse.conf. The keys mirror the DISCOURSE_* environment variables, with the prefix stripped and lowercased:
config/discourse.conf
max_data_explorer_api_req_mode = 'none'
max_user_api_reqs_per_day = '28800'
hostname = 'example.com'
redis_host = 'localhost'
smtp_user_name = 'example'
db_password = 'example'
smtp_address = 'example'
db_socket = ''
max_reqs_per_ip_per_10_seconds = '5000'
max_asset_reqs_per_ip_per_10_seconds = '20000'
max_reqs_rate_limit_on_private = 'false'
developer_emails = 'example@example.com'
max_user_api_reqs_per_minute = '200'
maxmind_license_key = 'example'
smtp_port = '587'
maxmind_account_id = '867/yours'
smtp_password = 'example'
max_reqs_per_ip_per_minute = '20000'
notification_email = 'example@example.com'
db_host = 'localhost'
enable_cors = 'true'
db_port = ''
max_reqs_per_ip_mode = 'none'
smtp_domain = 'example.com'
max_admin_api_reqs_per_minute = '600'
Do the bundle/pnpm install, db migration, and assets precompile. This is also how you upgrade Discourse and its plugins. As user discourse:
cd /var/www/discourse
git stash
git pull
git checkout tests-passed
cd plugins
for plugin in *
do
echo $plugin; cd ${plugin}; git pull; cd ..
done
cd ../
sed -i '/gem "rails_multisite"/i gem "rails"' Gemfile
bundle install
pnpm i
bundle exec rake db:migrate
bundle exec rake themes:update
bundle exec rake assets:precompile
I don’t want to use Unicorn. Heroku recommends the Puma web server instead of Unicorn. Here is my config/puma.rb, written after consulting config/unicorn.conf.rb:
config/puma.rb
# frozen_string_literal: true
require "fileutils"
discourse_path = File.expand_path(File.expand_path(File.dirname(__FILE__)) + "/../")
enable_logstash_logger = ENV["ENABLE_LOGSTASH_LOGGER"] == "1"
puma_stderr_path = "#{discourse_path}/log/puma.stderr.log"
puma_stdout_path = "#{discourse_path}/log/puma.stdout.log"
# Load logstash logger if enabled
if enable_logstash_logger
require_relative "../lib/discourse_logstash_logger"
FileUtils.touch(puma_stderr_path) if !File.exist?(puma_stderr_path)
# Note: You may need to adapt the logger initialization for Puma
log_formatter = proc do |severity, time, progname, msg|
event = {
"@timestamp" => Time.now.utc,
"message" => msg,
"severity" => severity,
"type" => "puma"
}
"#{event.to_json}\n"
end
else
stdout_redirect puma_stdout_path, puma_stderr_path, true
end
# Number of workers (processes)
workers ENV.fetch("PUMA_WORKERS", 8).to_i
# Set the directory
directory discourse_path
# Bind to the specified address and port (defaults to 127.0.0.1:3000)
bind "tcp://#{ENV["PUMA_BIND_ALL"] ? "0.0.0.0" : "127.0.0.1"}:#{ENV.fetch("PUMA_PORT", 3000)}"
# PID file location
FileUtils.mkdir_p("#{discourse_path}/tmp/pids")
pidfile ENV.fetch("PUMA_PID_PATH", "#{discourse_path}/tmp/pids/puma.pid")
# State file - used by pumactl
state_path "#{discourse_path}/tmp/pids/puma.state"
# Environment-specific configuration
if ENV["RAILS_ENV"] == "production"
# Production timeout
worker_timeout 30
else
# Development timeout
worker_timeout ENV.fetch("PUMA_TIMEOUT", 60).to_i
end
# Preload application
preload_app!
# Handle worker boot and shutdown
before_fork do
Discourse.preload_rails!
Discourse.before_fork
# Supervisor check
supervisor_pid = ENV["PUMA_SUPERVISOR_PID"].to_i
if supervisor_pid > 0
Thread.new do
loop do
unless File.exist?("/proc/#{supervisor_pid}")
puts "Kill self supervisor is gone"
Process.kill "TERM", Process.pid
end
sleep 2
end
end
end
# Sidekiq workers
sidekiqs = ENV["PUMA_SIDEKIQS"].to_i
if sidekiqs > 0
puts "starting #{sidekiqs} supervised sidekiqs"
require "demon/sidekiq"
Demon::Sidekiq.after_fork { DiscourseEvent.trigger(:sidekiq_fork_started) }
Demon::Sidekiq.start(sidekiqs)
if Discourse.enable_sidekiq_logging?
Signal.trap("USR1") do
# Delay Sidekiq log reopening
sleep 1
Demon::Sidekiq.kill("USR2")
end
end
end
# Email sync demon
if ENV["DISCOURSE_ENABLE_EMAIL_SYNC_DEMON"] == "true"
puts "starting up EmailSync demon"
Demon::EmailSync.start(1)
end
# Plugin demons
DiscoursePluginRegistry.demon_processes.each do |demon_class|
puts "starting #{demon_class.prefix} demon"
demon_class.start(1)
end
# Demon monitoring thread
Thread.new do
loop do
begin
sleep 60
if sidekiqs > 0
Demon::Sidekiq.ensure_running
Demon::Sidekiq.heartbeat_check
Demon::Sidekiq.rss_memory_check
end
if ENV["DISCOURSE_ENABLE_EMAIL_SYNC_DEMON"] == "true"
Demon::EmailSync.ensure_running
Demon::EmailSync.check_email_sync_heartbeat
end
DiscoursePluginRegistry.demon_processes.each(&:ensure_running)
rescue => e
Rails.logger.warn("Error in demon processes heartbeat check: #{e}\n#{e.backtrace.join("\n")}")
end
end
end
# Close Redis connection
Discourse.redis.close
end
on_worker_boot do
DiscourseEvent.trigger(:web_fork_started)
Discourse.after_fork
end
# Low-level worker options
threads 8, 32
To run Discourse, run puma -C config/puma.rb. With systemd, you can run it on boot and restart it on failure. Here is the service file:
/etc/systemd/system/discourse.service
[Unit]
Description=Discourse with Puma Server
After=network.target postgresql.service
Requires=postgresql.service
[Service]
Type=simple
User=discourse
Group=discourse
WorkingDirectory=/var/www/discourse
# requires running rvm use 3.3 --default before this service is run
ExecStart=/usr/bin/zsh -lc 'source /home/discourse/.zshrc && /home/discourse/.rvm/gems/ruby-3.3.7/bin/puma -C config/puma.rb'
ExecReload=/usr/bin/zsh -lc 'source /home/discourse/.zshrc && /home/discourse/.rvm/gems/ruby-3.3.7/bin/pumactl restart'
# Restart configuration
Restart=always
RestartSec=5s
# Basic security measures
NoNewPrivileges=true
ProtectSystem=full
ProtectHome=read-only
[Install]
WantedBy=multi-user.target
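Then, as root, load and enable the unit (standard systemd steps):

```shell
# Pick up the new unit file, start it now, and enable it on boot
sudo systemctl daemon-reload
sudo systemctl enable --now discourse
sudo systemctl status discourse --no-pager
```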
Now the Puma server listens on 127.0.0.1:3000. Adapt the nginx config file from the Docker image:
/etc/nginx/sites-enabled/discourse.conf
# Additional MIME types that you'd like nginx to handle go in here
types {
text/csv csv;
#application/wasm wasm;
}
upstream discourse { server 127.0.0.1:3000; }
# inactive means we keep stuff around for 1440 minutes (1 day) regardless of last access
# levels means it is a 2 deep hierarchy cause we can have lots of files
# max_size limits the size of the cache
proxy_cache_path /var/nginx/cache inactive=1440m levels=1:2 keys_zone=one:10m max_size=600m;
# Increased from the default value to accommodate large cookies during oAuth2 flows
# like in https://meta.discourse.org/t/x/74060 and large CSP and Link (preload) headers
proxy_buffer_size 32k;
proxy_buffers 4 32k;
# Increased from the default value to allow for a large volume of cookies in request headers
# Discourse itself tries to minimise cookie size, but we cannot control other cookies set by other tools on the same domain.
large_client_header_buffers 4 32k;
# attempt to preserve the proto, must be in http context
map $http_x_forwarded_proto $thescheme {
default $scheme;
"~https$" https;
}
log_format log_discourse '[$time_local] "$http_host" $remote_addr "$request" "$http_user_agent" "$sent_http_x_discourse_route" $status $bytes_sent "$http_referer" $upstream_response_time $request_time "$upstream_http_x_discourse_username" "$upstream_http_x_discourse_trackview" "$upstream_http_x_queue_time" "$upstream_http_x_redis_calls" "$upstream_http_x_redis_time" "$upstream_http_x_sql_calls" "$upstream_http_x_sql_time"';
# Allow bypass cache from localhost
#geo $bypass_cache {
# default 0;
# 127.0.0.1 1;
# ::1 1;
#}
limit_req_zone $binary_remote_addr zone=flood:10m rate=12r/s;
limit_req_zone $binary_remote_addr zone=bot:10m rate=200r/m;
limit_req_status 429;
limit_conn_zone $binary_remote_addr zone=connperip:10m;
limit_conn_status 429;
server {
access_log /var/log/nginx/access.log log_discourse;
#listen unix:/var/nginx/nginx.http.sock;
listen 443 ssl;
listen [::]:443 ssl;
server_name example.com;
ssl_certificate /etc/nginx/ssl/example.com.cer;
ssl_certificate_key /etc/nginx/ssl/example.com.key;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
set_real_ip_from unix:;
set_real_ip_from 127.0.0.1/32;
set_real_ip_from ::1/128;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
gzip on;
gzip_vary on;
gzip_min_length 1000;
gzip_comp_level 5;
gzip_types application/json text/css text/javascript application/x-javascript application/javascript image/svg+xml application/wasm;
gzip_proxied any;
server_tokens off;
sendfile on;
keepalive_timeout 65;
# maximum file upload size (keep up to date when changing the corresponding site setting)
client_max_body_size 128m ;
# path to discourse's public directory
set $public /var/www/discourse/public;
# without weak etags we get zero benefit from etags on dynamically compressed content
# further more etags are based on the file in nginx not sha of data
# use dates, it solves the problem fine even cross server
etag off;
# prevent direct download of backups
location ^~ /backups/ {
internal;
}
# bypass rails stack with a cheap 204 for favicon.ico requests
location /favicon.ico {
return 204;
access_log off;
log_not_found off;
}
location / {
root $public;
add_header ETag "";
# auth_basic on;
# auth_basic_user_file /etc/nginx/htpasswd;
location ~ ^/uploads/short-url/ {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Request-Start "t=${msec}";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $thescheme;
proxy_pass http://discourse;
break;
}
location ~ ^/(secure-media-uploads/|secure-uploads)/ {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Request-Start "t=${msec}";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $thescheme;
proxy_pass http://discourse;
break;
}
location ~* (fonts|assets|plugins|uploads)/.*\.(eot|ttf|woff|woff2|ico|otf)$ {
expires 1y;
add_header Cache-Control public,immutable;
add_header Access-Control-Allow-Origin *;
}
location = /srv/status {
access_log off;
log_not_found off;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Request-Start "t=${msec}";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $thescheme;
proxy_pass http://discourse;
break;
}
# some minimal caching here so we don't keep asking
# longer term we should increase probably to 1y
location ~ ^/javascripts/ {
expires 1d;
add_header Cache-Control public,immutable;
add_header Access-Control-Allow-Origin *;
}
location ~ ^/assets/(?<asset_path>.+)$ {
expires 1y;
# asset pipeline enables this
brotli_static on;
gzip_static on;
add_header Cache-Control public,immutable;
# HOOK in asset location (used for extensibility)
# TODO I don't think this break is needed, it just breaks out of rewrite
break;
}
location ~ ^/plugins/ {
expires 1y;
add_header Cache-Control public,immutable;
add_header Access-Control-Allow-Origin *;
}
# cache emojis
location ~ /images/emoji/ {
expires 1y;
add_header Cache-Control public,immutable;
add_header Access-Control-Allow-Origin *;
}
location ~ ^/uploads/ {
# NOTE: it is really annoying that we can't just define headers
# at the top level and inherit.
#
# proxy_set_header DOES NOT inherit, by design, we must repeat it,
# otherwise headers are not set correctly
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Request-Start "t=${msec}";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $thescheme;
proxy_set_header X-Sendfile-Type X-Accel-Redirect;
proxy_set_header X-Accel-Mapping $public/=/downloads/;
expires 1y;
add_header Cache-Control public,immutable;
## optional upload anti-hotlinking rules
#valid_referers none blocked mysite.com *.mysite.com;
#if ($invalid_referer) { return 403; }
# custom CSS
location ~ /stylesheet-cache/ {
add_header Access-Control-Allow-Origin *;
try_files $uri =404;
}
# this allows us to bypass rails
location ~* \.(gif|png|jpg|jpeg|bmp|tif|tiff|ico|webp|avif)$ {
add_header Access-Control-Allow-Origin *;
try_files $uri =404;
}
# SVG needs an extra header attached
location ~* \.(svg)$ {
add_header Access-Control-Allow-Origin *;
try_files $uri =404;
}
# thumbnails & optimized images
location ~ /_?optimized/ {
add_header Access-Control-Allow-Origin *;
try_files $uri =404;
}
proxy_pass http://discourse;
break;
}
location ~ ^/admin/backups/ {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Request-Start "t=${msec}";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $thescheme;
proxy_set_header X-Sendfile-Type X-Accel-Redirect;
proxy_set_header X-Accel-Mapping $public/=/downloads/;
proxy_pass http://discourse;
break;
}
# This big block is needed so we can selectively enable
# acceleration for backups, avatars, sprites and so on.
# see note about repetition above
location ~ ^/(svg-sprite/|letter_avatar/|letter_avatar_proxy/|user_avatar|highlight-js|stylesheets|theme-javascripts|favicon/proxied|service-worker) {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Request-Start "t=${msec}";
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $thescheme;
# if Set-Cookie is in the response nothing gets cached
# this is double bad cause we are not passing last modified in
proxy_ignore_headers "Set-Cookie";
proxy_hide_header "Set-Cookie";
proxy_hide_header "X-Discourse-Username";
proxy_hide_header "X-Runtime";
# note x-accel-redirect can not be used with proxy_cache
proxy_cache one;
proxy_cache_key "$scheme,$host,$request_uri";
proxy_cache_valid 200 301 302 7d;
#proxy_cache_bypass $bypass_cache;
proxy_pass http://discourse;
break;
}
# we need buffering off for message bus
location /message-bus/ {
proxy_set_header X-Request-Start "t=${msec}";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $thescheme;
proxy_http_version 1.1;
proxy_buffering off;
proxy_pass http://discourse;
break;
}
# this means every file in public is tried first
try_files $uri @discourse;
}
location /downloads/ {
internal;
alias $public/;
}
location @discourse {
limit_conn connperip 20;
limit_req zone=flood burst=12 nodelay;
limit_req zone=bot burst=100 nodelay;
proxy_set_header Host $http_host;
proxy_set_header X-Request-Start "t=${msec}";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $thescheme;
proxy_pass http://discourse;
}
}
Now your Discourse can be accessed at https://example.com.
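Note that proxy_cache_path above points at /var/nginx/cache, which does not exist by default; nginx will fail to start without it. A setup step I needed (the www-data owner is an assumption matching Ubuntu's default nginx user):

```shell
# Create the proxy cache directory and hand it to the nginx worker user,
# then validate and reload the config
sudo mkdir -p /var/nginx/cache
sudo chown www-data: /var/nginx/cache
sudo nginx -t && sudo systemctl reload nginx
```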
maintenance
To access the Rails console, simply run rails c as user discourse in /var/www/discourse. The discourse command found in the official docs is basically bundle exec script/discourse.
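For example (subcommands come from the upstream CLI; check script/discourse for the full list):

```shell
cd /var/www/discourse
bundle exec rails c                              # interactive Rails console
bundle exec script/discourse backup              # create a backup
bundle exec script/discourse restore <filename>  # restore from one
```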
To upgrade Discourse, consult #upgrade-cmd, then restart Puma with either pumactl restart or pumactl phased-restart. For the difference, consult puma/docs/restart.md in the Puma GitHub repository.