Cloudflare R2 integration: migrating Discourse step by step

This is a post-mortem/runbook of a real migration. I skip the common Discourse prep (the official docs cover it) and focus on the exact switches, the Cloudflare R2 gotchas, the rails/rake one-liners that mattered, what failed, and how to make the same move low-risk next time.


Target end-state

  • Discourse runs on the new host (Docker, single app container).
  • Uploads + front-end assets live on Cloudflare R2:
    • Bucket discourse-uploads (public)
    • Bucket discourse-backups (private)
  • R2 custom domain: https://files.example.com (created in R2 → Custom domains, not a manual cross-account CNAME).

0) DB backups that actually work (nightly and cutover)

Nightly backups are for disaster recovery. A last-minute backup is for migration cutover. Keep both.

0.1 Policy

  • Nightly: DB-only backup (.sql.gz, no uploads) → verify locally → upload to R2. Keep ≥7 copies (or use an R2 lifecycle rule; sketch below).
  • Cutover: right before DNS switch, make another DB-only backup and restore that to the new host to minimize content gap.
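
A minimal lifecycle sketch via the S3-compatible API (R2 supports PutBucketLifecycleConfiguration; the 14-day window and an aws CLI already holding the R2 token are assumptions, so adjust to whatever keeps ≥7 nightly copies):

aws s3api put-bucket-lifecycle-configuration \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com" \
  --bucket discourse-backups \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-nightly-dumps",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 14}
    }]
  }'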

0.2 Make a DB-only backup and verify

Inside the container:

# Optional but nice: reduce writes while snapshotting
discourse enable_readonly

# Trigger a DB-only backup from Admin UI (uncheck "with uploads")
# or CLI:
discourse backup

# Verify the artifact
ls -lh /var/discourse/shared/standalone/backups/default/
gzip -t /var/discourse/shared/standalone/backups/default/<DB_ONLY>.sql.gz   # tests integrity without extracting

Deep verify (best): restore to a temporary DB and count rows:

cd /var/discourse && ./launcher enter app
sudo -E -u postgres psql -tc "DROP DATABASE IF EXISTS verifydb;"
sudo -E -u postgres createdb verifydb
zcat /shared/backups/default/<DB_ONLY>.sql.gz | sudo -E -u postgres psql verifydb

sudo -E -u postgres psql -d verifydb -c "select count(*) from topics where deleted_at is null;"
sudo -E -u postgres psql -d verifydb -c "select count(*) from posts  where post_type=1 and deleted_at is null;"

sudo -E -u postgres dropdb verifydb
exit

If the gzip test or the temporary restore fails, do not upload that file to R2—fix and re-backup.

0.3 Push to R2 only after it passes

# point the CLI at R2 explicitly (or set endpoint_url in ~/.aws/config)
aws s3 cp /var/discourse/shared/standalone/backups/default/<DB_ONLY>.sql.gz \
  s3://discourse-backups/ \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com"
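
Then confirm the object landed (same endpoint):

aws s3 ls s3://discourse-backups/ \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com"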

0.4 Why sizes differ (1–4 GB is normal)

Both Admin nightly and manual pg_dump produce DB-only .sql.gz. Size differences usually come from included tables and compression, not “missing posts”. If you want to see what’s inside:

# Which tables have data in the dump?
zcat <DB_ONLY>.sql.gz | grep -E '^COPY public\.' | awk '{print $2}' | sort -u | head

# Quick line-count approximation for key tables
zcat <DB_ONLY>.sql.gz | awk '/^COPY public.posts /{c=1;next}/^\\\./{c=0} c' | wc -l
zcat <DB_ONLY>.sql.gz | awk '/^COPY public.topics /{c=1;next}/^\\\./{c=0} c' | wc -l

If those counts match expectations, the backup contains all posts/topics regardless of the file size.
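
To survey every table in a single pass instead of re-reading a large dump per table, a small awk sketch (nothing Discourse-specific; it just counts rows between each COPY statement and its closing \.):

zcat <DB_ONLY>.sql.gz | awk '
  /^COPY public\./ { split($2, a, "."); tbl = a[2]; c = 1; next }  # COPY public.<table> (...) opens a data block
  /^\\\./          { c = 0; next }                                 # a lone \. closes it
  c                { rows[tbl]++ }
  END { for (t in rows) print rows[t], t }
' | sort -rn | head -n 20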


1) Old host: prepare and copy the (verified) DB-only backup

Announce maintenance → enable read-only:

cd /var/discourse && ./launcher enter app
discourse enable_readonly
exit

Copy the verified .sql.gz to the new host:

rsync -avP -e "ssh -o StrictHostKeyChecking=no" \
  root@OLD:/var/discourse/shared/standalone/backups/default/<DB_ONLY>.sql.gz \
  /var/discourse/shared/standalone/backups/default/
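
Optionally confirm the copy is byte-identical before restoring (run on both hosts and compare):

sha256sum /var/discourse/shared/standalone/backups/default/<DB_ONLY>.sql.gz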

If you want an almost-zero content gap, repeat this step right before DNS cutover.


2) New host bootstrap

Install Docker + discourse_docker:

apt-get update && apt-get install -y git curl tzdata
curl -fsSL https://get.docker.com | sh
systemctl enable --now docker

git clone https://github.com/discourse/discourse_docker /var/discourse

Create containers/app.yml with production values. Keep SSL templates commented until DNS points here. Minimum env set:

env:
  DISCOURSE_HOSTNAME: forum.example.com

  # R2 / S3
  DISCOURSE_USE_S3: "true"
  DISCOURSE_S3_REGION: "auto"
  DISCOURSE_S3_ENDPOINT: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com"
  DISCOURSE_S3_FORCE_PATH_STYLE: "true"
  DISCOURSE_S3_BUCKET: "discourse-uploads"
  DISCOURSE_S3_BACKUP_BUCKET: "discourse-backups"
  DISCOURSE_S3_ACCESS_KEY_ID: "<R2_KEY>"
  DISCOURSE_S3_SECRET_ACCESS_KEY: "<R2_SECRET>"
  DISCOURSE_S3_CDN_URL: "https://files.example.com"
  DISCOURSE_BACKUP_LOCATION: "s3"

  # R2 checksum knobs (prevent conflicts)
  AWS_REQUEST_CHECKSUM_CALCULATION: "WHEN_REQUIRED"
  AWS_RESPONSE_CHECKSUM_VALIDATION: "WHEN_REQUIRED"

  # SMTP / Let’s Encrypt email
  DISCOURSE_SMTP_ADDRESS: smtp.gmail.com
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: you@example.com
  DISCOURSE_SMTP_PASSWORD: "<app-password>"
  DISCOURSE_SMTP_DOMAIN: example.com
  DISCOURSE_NOTIFICATION_EMAIL: you@example.com
  LETSENCRYPT_ACCOUNT_EMAIL: you@example.com

Publish assets to R2 during rebuild:

hooks:
  after_assets_precompile:
    - exec:
        cd: $home
        cmd:
          - sudo -E -u discourse bundle exec rake s3:upload_assets
          - sudo -E -u discourse bundle exec rake s3:expire_missing_assets

Bring the container up (HTTP-only for now):

cd /var/discourse && ./launcher rebuild app
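
A quick local sanity check while the site is still HTTP-only (assumes the stock standalone template publishing port 80 on the host):

curl -sI http://127.0.0.1/ | head -n 1   # any HTTP response means the container is serving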

3) Restore the DB-only dump (.sql.gz via psql)

cd /var/discourse && ./launcher enter app

sv stop unicorn || true; sv stop sidekiq || true

# ensure a clean DB
sudo -E -u postgres psql -c "REVOKE CONNECT ON DATABASE discourse FROM public;"
sudo -E -u postgres psql -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname='discourse';"
sudo -E -u postgres psql -c "DROP DATABASE IF EXISTS discourse;"
sudo -E -u postgres psql -c "CREATE DATABASE discourse WITH OWNER discourse TEMPLATE template0 ENCODING 'UTF8';"
sudo -E -u postgres psql -d discourse -c "CREATE EXTENSION IF NOT EXISTS citext;"
sudo -E -u postgres psql -d discourse -c "CREATE EXTENSION IF NOT EXISTS hstore;"

# import the dump
zcat /shared/backups/default/<DB_ONLY>.sql.gz | sudo -E -u postgres psql discourse

sv start unicorn
[ -d /etc/service/sidekiq ] && sv start sidekiq || true
exit

If you’re still carrying local uploads pre-R2, you can rsync them once as a safety net; we’ll migrate them to R2 next.
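
For example (paths follow the standard standalone layout; UID 1000 is the in-container discourse user):

rsync -avP -e ssh \
  root@OLD:/var/discourse/shared/standalone/uploads/ \
  /var/discourse/shared/standalone/uploads/
chown -R 1000:1000 /var/discourse/shared/standalone/uploads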


4) R2 knobs that mattered

Buckets & token: create discourse-uploads (public) and discourse-backups (private). Bootstrap with an Account API Token scoped to those two buckets with Admin Read & Write (so PutBucketCors works), then rotate to Object Read & Write after success.

Custom domain: add files.example.com in R2 → Custom domains under the same Cloudflare account as your DNS zone (avoids 1014 cross-account CNAME errors).

CORS on discourse-uploads:

[
  {
    "AllowedOrigins": ["https://forum.example.com","https://files.example.com"],
    "AllowedMethods": ["GET","HEAD"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["*"],
    "MaxAgeSeconds": 86400
  }
]
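
One way to apply it from the shell rather than the dashboard: the aws CLI's put-bucket-cors call, which wants the rules wrapped in a CORSRules object (endpoint and token as configured above):

cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://forum.example.com", "https://files.example.com"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["*"],
      "MaxAgeSeconds": 86400
    }
  ]
}
EOF

aws s3api put-bucket-cors \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com" \
  --bucket discourse-uploads \
  --cors-configuration file://cors.json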

Rebuild so CSS/JS/fonts publish to R2:

cd /var/discourse && ./launcher rebuild app

5) One-time migration of historical uploads to R2

cd /var/discourse && ./launcher enter app

yes "" | AWS_REQUEST_CHECKSUM_CALCULATION=WHEN_REQUIRED AWS_RESPONSE_CHECKSUM_VALIDATION=WHEN_REQUIRED \
sudo -E -u discourse RAILS_ENV=production bundle exec rake uploads:migrate_to_s3

If you get “X posts not remapped…”, see §7.2 for targeted fixes.


6) Switch production domain

Set in app.yml:

DISCOURSE_HOSTNAME: forum.example.com
LETSENCRYPT_ACCOUNT_EMAIL: you@example.com

DNS: point forum.example.com to the new front (or origin) IP, enable SSL templates, then:

cd /var/discourse && ./launcher rebuild app

Sanity:

curl -I https://forum.example.com
./launcher logs app | tail -n 200

Seeing HTTP/2 403 for anonymous requests usually means login_required is enabled, not an outage.


7) Things that actually broke (and fixes)

7.1 R2 checksum conflict

Aws::S3::Errors::InvalidRequest: You can only specify one non-default checksum at a time.

Fix (keep permanently):

AWS_REQUEST_CHECKSUM_CALCULATION: "WHEN_REQUIRED"
AWS_RESPONSE_CHECKSUM_VALIDATION: "WHEN_REQUIRED"

7.2 “X posts are not remapped to new S3 upload URL”

Reason: some cooked HTML still points at /uploads/<db>/original/....

Targeted rebake:

sudo -E -u discourse RAILS_ENV=production bundle exec rails r '
db = RailsMultisite::ConnectionManagement.current_db
ids = Post.where("cooked LIKE ?", "%/uploads/#{db}/original%").pluck(:id)
ids.each { |pid| Post.find(pid).rebake! }
puts "rebaked=#{ids.size}"
'

Or remap a static prefix then rebake touched posts:

sudo -E -u discourse RAILS_ENV=production bundle exec \
rake "posts:remap[/uploads/default/original,https://files.example.com/original]"

Re-run the migration to confirm everything is clean:

yes "" | AWS_REQUEST_CHECKSUM_CALCULATION=WHEN_REQUIRED AWS_RESPONSE_CHECKSUM_VALIDATION=WHEN_REQUIRED \
sudo -E -u discourse RAILS_ENV=production bundle exec rake uploads:migrate_to_s3

7.3 Tasks “missing”

Always run with bundler + env:

sudo -E -u discourse RAILS_ENV=production bundle exec rake -T s3
sudo -E -u discourse RAILS_ENV=production bundle exec rake -T uploads

Print effective S3 settings:

sudo -E -u discourse RAILS_ENV=production bundle exec rails r \
'puts({ use_s3: ENV["DISCOURSE_USE_S3"], bucket: ENV["DISCOURSE_S3_BUCKET"], endpoint: ENV["DISCOURSE_S3_ENDPOINT"], cdn: ENV["DISCOURSE_S3_CDN_URL"] })'

7.4 s3:upload_assets AccessDenied

Use an Admin RW token for bootstrap (bucket-level CORS ops), then rotate to Object RW.


8) Verification

Inside the container

# URLs now using the CDN
sudo -E -u discourse RAILS_ENV=production bundle exec rails r \
'puts Upload.where("url LIKE ?", "%files.example.com%").limit(5).pluck(:url)'

# Remaining cooked references to local uploads (should trend to 0)
sudo -E -u discourse RAILS_ENV=production bundle exec rails r \
'db=RailsMultisite::ConnectionManagement.current_db; puts Post.where("cooked LIKE ?", "%/uploads/#{db}/original%").count'

Browser

  • Network tab shows assets from files.example.com.
  • Old topics show images under https://files.example.com/original/....
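
A shell spot-check of the same thing, using any asset URL copied from the Network tab (the path below is just the standard layout, not a real object):

curl -sI "https://files.example.com/original/1X/<hash>.png" | head -n 5   # expect HTTP/2 200 plus Cloudflare headers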

Backups

  • Admin → Backups → create one; confirm a new object appears in discourse-backups on R2.

9) Cleanup

When cooked references are essentially 0:

mv /var/discourse/shared/standalone/uploads /var/discourse/shared/standalone/uploads.bak
mkdir -p /var/discourse/shared/standalone/uploads
chown -R 1000:1000 /var/discourse/shared/standalone/uploads

# after a few stable days
rm -rf /var/discourse/shared/standalone/uploads.bak

Rotate secrets (R2 token → Object RW; SMTP app password if it ever hit logs).


10) Next time (playbook) — R2-first path

  1. Old → New (DB-only): read-only → backup → restore .sql.gz via psql.
  2. Wire R2 before DNS: buckets, token (Admin RW → later Object RW), custom domain, CORS.
  3. env + hooks: checksum flags + s3:upload_assets; rebuild.
  4. DNS cutover to the new host.
  5. Migrate uploads to R2.
  6. Fix stragglers (targeted rebake/remap) → quick re-run of the migration.
  7. Sidekiq finishes background rebakes (or posts:rebake_uncooked_posts).
  8. Backups to R2 verified.
  9. Permissions hardening and secret rotation.
  10. Cleanup local uploads after a cooling-off period.

Appendix A — “verify-before-upload” nightly (pseudo-cron)

LATEST=$(ls -1t /var/discourse/shared/standalone/backups/default/*.sql.gz | head -n1)

# 1) gzip integrity
gzip -t "$LATEST" || exit 1

# 2) temporary-DB row counts
# launcher enter allocates a TTY and won't take a heredoc from cron,
# so exec into the container non-interactively instead. Unquoted EOS
# lets $(basename "$LATEST") expand on the host before the script runs.
cd /var/discourse && docker exec -i app bash -s <<EOS
sudo -E -u postgres psql -tc "DROP DATABASE IF EXISTS verifydb;"
sudo -E -u postgres createdb verifydb
zcat /shared/backups/default/$(basename "$LATEST") | sudo -E -u postgres psql verifydb
sudo -E -u postgres psql -d verifydb -c "select count(*) as topics from topics where deleted_at is null;"
sudo -E -u postgres psql -d verifydb -c "select count(*) as posts  from posts  where post_type=1 and deleted_at is null;"
sudo -E -u postgres dropdb verifydb
EOS

# 3) only then upload to R2
aws s3 cp "$LATEST" s3://discourse-backups/

Appendix B — Minimal front proxy (optional)

A tiny reverse proxy VM in front can terminate TLS and forward to the origin over HTTPS. Replace IPs with your own.

Upstream: /etc/nginx/conf.d/upstream.conf

upstream origin_forum {
    server <ORIGIN_IP>:443;
    keepalive 64;
}

Site: /etc/nginx/sites-available/forum.conf

server {
    listen 80;
    listen [::]:80;
    server_name forum.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name forum.example.com;

    ssl_certificate     /etc/letsencrypt/live/forum.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/forum.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_session_timeout 1d;

    client_max_body_size 100m;
    add_header Strict-Transport-Security "max-age=31536000" always;

    location / {
        proxy_pass https://origin_forum;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host forum.example.com;
        proxy_ssl_server_name on;
        proxy_ssl_name forum.example.com;
        # optional verification:
        # proxy_ssl_verify on;
        # proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;

        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP         $remote_addr;

        proxy_buffering off;
        proxy_read_timeout 360s;
        proxy_send_timeout 360s;
        proxy_connect_timeout 60s;

        add_header X-Relay relay-min always;
    }

    location /message-bus/ {
        proxy_pass https://origin_forum;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host forum.example.com;
        proxy_ssl_server_name on;
        proxy_ssl_name forum.example.com;
        proxy_buffering off;
        proxy_read_timeout 3600s;
    }
}
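
The vhost above assumes a certificate already exists under /etc/letsencrypt/live/forum.example.com/. One way to issue it before the first reload, assuming certbot and a free port 80 (or swap in --nginx after enabling the site):

certbot certonly --standalone -d forum.example.com \
  -m you@example.com --agree-tos --no-eff-email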

Enable & reload:

ln -sf /etc/nginx/sites-available/forum.conf /etc/nginx/sites-enabled/forum.conf
rm -f /etc/nginx/sites-enabled/default
nginx -t && systemctl reload nginx

Quick check:

curl -I https://forum.example.com   # expect HTTP/2 200/302 and X-Relay header