Failed Rebuild of Poorly Maintained Server w/ Ownership Issues - Looking for Help

Hello, and thanks for reading! Our community lost its primary technical administrator, which left us with a single point of failure in a number of ways. Recently, one of the admins noticed that transactional emails were no longer being delivered, and I am the only active person remaining with any systems administration experience.

Fortunately, one member still had ownership of our cloud server and was responsible for payments, so we were able to gain root access to our Discourse server.

We originally thought that we could make a local backup, but backups are configured to go to S3, so the local backup steps didn’t work, and our last local backup is from 2019.

While the backups on S3 are as recent as last week, we do not have access to the S3 bucket. Our remaining admin supposedly receives the backup emails, but whether he can download the backups without authenticating to S3 is an open question.

At this point, we decided we could either attempt a rebuild or reconfigure the mail services with a new SendGrid account - we were already using SendGrid but didn’t have the credentials.

I decided to attempt a rebuild since it seemed like the more reliable option for resolving the errors, and one we would inevitably need anyway.

It failed with the following output:

==================== REBUILD LOG ====================
x86_64 arch detected.
WARNING: containers/app.yml file is world-readable. You can secure this file by running: chmod o-rwx containers/app.yml
Ensuring launcher is up to date
Fetching origin
Launcher has diverged source, this is only expected in Dev mode
Stopping old container
+ /usr/bin/docker stop -t 60 app
app
2.0.20230313-1023: Pulling from discourse/base
Digest: sha256:f7467469ab9e39c3548d4478e3f416c05b34a0ee58eb6e40b963e562005669cc
Status: Image is up to date for discourse/base:2.0.20230313-1023
docker.io/discourse/base:2.0.20230313-1023
/usr/local/lib/ruby/gems/3.2.0/gems/pups-1.1.1/lib/pups.rb
/usr/local/bin/pups --stdin
I, [2025-03-23T00:18:18.600612 #1]  INFO -- : Reading from stdin
I, [2025-03-23T00:18:18.607987 #1]  INFO -- : > locale-gen $LANG && update-locale
I, [2025-03-23T00:18:18.693415 #1]  INFO -- : Generating locales (this might take a while)...
Generation complete.

I, [2025-03-23T00:18:18.693711 #1]  INFO -- : > mkdir -p /shared/postgres_run
I, [2025-03-23T00:18:18.699738 #1]  INFO -- :
I, [2025-03-23T00:18:18.700585 #1]  INFO -- : > chown postgres:postgres /shared/postgres_run
I, [2025-03-23T00:18:18.705669 #1]  INFO -- :
I, [2025-03-23T00:18:18.706036 #1]  INFO -- : > chmod 775 /shared/postgres_run
I, [2025-03-23T00:18:18.710603 #1]  INFO -- :
I, [2025-03-23T00:18:18.710840 #1]  INFO -- : > rm -fr /var/run/postgresql
I, [2025-03-23T00:18:18.715934 #1]  INFO -- :
I, [2025-03-23T00:18:18.716265 #1]  INFO -- : > ln -s /shared/postgres_run /var/run/postgresql
I, [2025-03-23T00:18:18.720901 #1]  INFO -- :
I, [2025-03-23T00:18:18.721141 #1]  INFO -- : > socat /dev/null UNIX-CONNECT:/shared/postgres_run/.s.PGSQL.5432 || exit 0 && echo postgres already running stop container ; exit 1
2025/03/23 00:18:18 socat[19] E connect(6, AF=1 "/shared/postgres_run/.s.PGSQL.5432", 36): No such file or directory
I, [2025-03-23T00:18:18.735107 #1]  INFO -- :
I, [2025-03-23T00:18:18.735305 #1]  INFO -- : > rm -fr /shared/postgres_run/.s*
I, [2025-03-23T00:18:18.741065 #1]  INFO -- :
I, [2025-03-23T00:18:18.741225 #1]  INFO -- : > rm -fr /shared/postgres_run/*.pid
I, [2025-03-23T00:18:18.747157 #1]  INFO -- :
I, [2025-03-23T00:18:18.747321 #1]  INFO -- : > mkdir -p /shared/postgres_run/13-main.pg_stat_tmp
I, [2025-03-23T00:18:18.752360 #1]  INFO -- :
I, [2025-03-23T00:18:18.752671 #1]  INFO -- : > chown postgres:postgres /shared/postgres_run/13-main.pg_stat_tmp
I, [2025-03-23T00:18:18.758084 #1]  INFO -- :
I, [2025-03-23T00:18:18.768877 #1]  INFO -- : File > /etc/service/postgres/run  chmod: +x  chown:
I, [2025-03-23T00:18:18.778907 #1]  INFO -- : File > /etc/service/postgres/log/run  chmod: +x  chown:
I, [2025-03-23T00:18:18.788505 #1]  INFO -- : File > /etc/runit/3.d/99-postgres  chmod: +x  chown:
I, [2025-03-23T00:18:18.799277 #1]  INFO -- : File > /root/upgrade_postgres  chmod: +x  chown:
I, [2025-03-23T00:18:18.799808 #1]  INFO -- : > chown -R root /var/lib/postgresql/13/main
I, [2025-03-23T00:18:19.007579 #1]  INFO -- :
I, [2025-03-23T00:18:19.007806 #1]  INFO -- : > [ ! -e /shared/postgres_data ] && install -d -m 0755 -o postgres -g postgres /shared/postgres_data && sudo -E -u postgres /usr/lib/postgresql/13/bin/initdb -D /shared/postgres_data || exit 0
I, [2025-03-23T00:18:19.010768 #1]  INFO -- :
I, [2025-03-23T00:18:19.010931 #1]  INFO -- : > chown -R postgres:postgres /shared/postgres_data
I, [2025-03-23T00:18:19.047929 #1]  INFO -- :
I, [2025-03-23T00:18:19.048161 #1]  INFO -- : > chown -R postgres:postgres /var/run/postgresql
I, [2025-03-23T00:18:19.051531 #1]  INFO -- :
I, [2025-03-23T00:18:19.051974 #1]  INFO -- : > /root/upgrade_postgres
I, [2025-03-23T00:18:19.062513 #1]  INFO -- :
I, [2025-03-23T00:18:19.062718 #1]  INFO -- : > rm /root/upgrade_postgres
I, [2025-03-23T00:18:19.065696 #1]  INFO -- :
I, [2025-03-23T00:18:19.066378 #1]  INFO -- : Replacing data_directory = '/var/lib/postgresql/13/main' with data_directory = '/shared/postgres_data' in /etc/postgresql/13/main/postgresql.conf
I, [2025-03-23T00:18:19.067338 #1]  INFO -- : Replacing (?-mix:#?listen_addresses *=.*) with listen_addresses = '*' in /etc/postgresql/13/main/postgresql.conf
I, [2025-03-23T00:18:19.067801 #1]  INFO -- : Replacing (?-mix:#?synchronous_commit *=.*) with synchronous_commit = $db_synchronous_commit in /etc/postgresql/13/main/postgresql.conf
I, [2025-03-23T00:18:19.068343 #1]  INFO -- : Replacing (?-mix:#?shared_buffers *=.*) with shared_buffers = $db_shared_buffers in /etc/postgresql/13/main/postgresql.conf
I, [2025-03-23T00:18:19.068760 #1]  INFO -- : Replacing (?-mix:#?work_mem *=.*) with work_mem = $db_work_mem in /etc/postgresql/13/main/postgresql.conf
I, [2025-03-23T00:18:19.069202 #1]  INFO -- : Replacing (?-mix:#?default_text_search_config *=.*) with default_text_search_config = '$db_default_text_search_config' in /etc/postgresql/13/main/postgresql.conf
I, [2025-03-23T00:18:19.069589 #1]  INFO -- : > install -d -m 0755 -o postgres -g postgres /shared/postgres_backup
I, [2025-03-23T00:18:19.075219 #1]  INFO -- :
I, [2025-03-23T00:18:19.075772 #1]  INFO -- : Replacing (?-mix:#?checkpoint_segments *=.*) with checkpoint_segments = $db_checkpoint_segments in /etc/postgresql/13/main/postgresql.conf
I, [2025-03-23T00:18:19.076190 #1]  INFO -- : Replacing (?-mix:#?logging_collector *=.*) with logging_collector = $db_logging_collector in /etc/postgresql/13/main/postgresql.conf
I, [2025-03-23T00:18:19.076722 #1]  INFO -- : Replacing (?-mix:#?log_min_duration_statement *=.*) with log_min_duration_statement = $db_log_min_duration_statement in /etc/postgresql/13/main/postgresql.conf
I, [2025-03-23T00:18:19.077185 #1]  INFO -- : Replacing (?-mix:^#local +replication +postgres +peer$) with local replication postgres  peer in /etc/postgresql/13/main/pg_hba.conf
I, [2025-03-23T00:18:19.077661 #1]  INFO -- : Replacing (?-mix:^host.*all.*all.*127.*$) with host all all 0.0.0.0/0 md5 in /etc/postgresql/13/main/pg_hba.conf
I, [2025-03-23T00:18:19.078027 #1]  INFO -- : Replacing (?-mix:^host.*all.*all.*::1\/128.*$) with host all all ::/0 md5 in /etc/postgresql/13/main/pg_hba.conf
I, [2025-03-23T00:18:19.078404 #1]  INFO -- : > HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main
I, [2025-03-23T00:18:19.080855 #1]  INFO -- : > sleep 5
2025-03-23 00:18:19.198 UTC [42] LOG:  starting PostgreSQL 13.10 (Debian 13.10-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit
2025-03-23 00:18:19.199 UTC [42] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2025-03-23 00:18:19.199 UTC [42] LOG:  listening on IPv6 address "::", port 5432
2025-03-23 00:18:19.205 UTC [42] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2025-03-23 00:18:19.214 UTC [45] LOG:  database system was shut down at 2025-03-23 00:03:12 UTC
2025-03-23 00:18:19.229 UTC [42] LOG:  database system is ready to accept connections
I, [2025-03-23T00:18:24.084187 #1]  INFO -- :
I, [2025-03-23T00:18:24.084488 #1]  INFO -- : > su postgres -c 'createdb discourse' || true
2025-03-23 00:18:24.204 UTC [55] postgres@postgres ERROR:  database "discourse" already exists
2025-03-23 00:18:24.204 UTC [55] postgres@postgres STATEMENT:  CREATE DATABASE discourse;
createdb: error: database creation failed: ERROR:  database "discourse" already exists
I, [2025-03-23T00:18:24.207833 #1]  INFO -- :
I, [2025-03-23T00:18:24.208363 #1]  INFO -- : > su postgres -c 'psql discourse -c "create user discourse;"' || true
2025-03-23 00:18:24.305 UTC [59] postgres@discourse ERROR:  role "discourse" already exists
2025-03-23 00:18:24.305 UTC [59] postgres@discourse STATEMENT:  create user discourse;
ERROR:  role "discourse" already exists
I, [2025-03-23T00:18:24.309053 #1]  INFO -- :
I, [2025-03-23T00:18:24.309640 #1]  INFO -- : > su postgres -c 'psql discourse -c "grant all privileges on database discourse to discourse;"' || true
I, [2025-03-23T00:18:24.419882 #1]  INFO -- : GRANT

I, [2025-03-23T00:18:24.420493 #1]  INFO -- : > su postgres -c 'psql discourse -c "alter schema public owner to discourse;"'
I, [2025-03-23T00:18:24.517946 #1]  INFO -- : ALTER SCHEMA

I, [2025-03-23T00:18:24.518418 #1]  INFO -- : > su postgres -c 'psql template1 -c "create extension if not exists hstore;"'
NOTICE:  extension "hstore" already exists, skipping
I, [2025-03-23T00:18:24.625671 #1]  INFO -- : CREATE EXTENSION

I, [2025-03-23T00:18:24.626326 #1]  INFO -- : > su postgres -c 'psql template1 -c "create extension if not exists pg_trgm;"'
NOTICE:  extension "pg_trgm" already exists, skipping
I, [2025-03-23T00:18:24.725233 #1]  INFO -- : CREATE EXTENSION

I, [2025-03-23T00:18:24.725801 #1]  INFO -- : > su postgres -c 'psql discourse -c "create extension if not exists hstore;"'
NOTICE:  extension "hstore" already exists, skipping
I, [2025-03-23T00:18:24.827529 #1]  INFO -- : CREATE EXTENSION

I, [2025-03-23T00:18:24.828107 #1]  INFO -- : > su postgres -c 'psql discourse -c "create extension if not exists pg_trgm;"'
NOTICE:  extension "pg_trgm" already exists, skipping
I, [2025-03-23T00:18:24.931702 #1]  INFO -- : CREATE EXTENSION

I, [2025-03-23T00:18:24.932258 #1]  INFO -- : > sudo -u postgres psql discourse
I, [2025-03-23T00:18:24.935282 #1]  INFO -- : update pg_database set encoding = pg_char_to_encoding('UTF8') where datname = 'discourse' AND encoding = pg_char_to_encoding('SQL_ASCII');

I, [2025-03-23T00:18:25.031195 #1]  INFO -- : File > /var/lib/postgresql/take-database-backup  chmod: +x  chown: postgres:postgres
I, [2025-03-23T00:18:25.037342 #1]  INFO -- : File > /var/spool/cron/crontabs/postgres  chmod:   chown:
I, [2025-03-23T00:18:25.037745 #1]  INFO -- : > echo postgres installed!
I, [2025-03-23T00:18:25.042262 #1]  INFO -- : postgres installed!

I, [2025-03-23T00:18:25.052240 #1]  INFO -- : File > /etc/service/redis/run  chmod: +x  chown:
I, [2025-03-23T00:18:25.061161 #1]  INFO -- : File > /etc/service/redis/log/run  chmod: +x  chown:
I, [2025-03-23T00:18:25.070080 #1]  INFO -- : File > /etc/runit/3.d/10-redis  chmod: +x  chown:
I, [2025-03-23T00:18:25.070956 #1]  INFO -- : Replacing daemonize yes with  in /etc/redis/redis.conf
I, [2025-03-23T00:18:25.072697 #1]  INFO -- : Replacing (?-mix:^pidfile.*$) with  in /etc/redis/redis.conf
I, [2025-03-23T00:18:25.073799 #1]  INFO -- : > install -d -m 0755 -o redis -g redis /shared/redis_data
I, [2025-03-23T00:18:25.077931 #1]  INFO -- :
I, [2025-03-23T00:18:25.078752 #1]  INFO -- : Replacing (?-mix:^logfile.*$) with logfile "" in /etc/redis/redis.conf
I, [2025-03-23T00:18:25.080205 #1]  INFO -- : Replacing (?-mix:^bind .*$) with  in /etc/redis/redis.conf
I, [2025-03-23T00:18:25.081472 #1]  INFO -- : Replacing (?-mix:^dir .*$) with dir /shared/redis_data in /etc/redis/redis.conf
I, [2025-03-23T00:18:25.082868 #1]  INFO -- : Replacing (?-mix:^protected-mode yes) with protected-mode no in /etc/redis/redis.conf
I, [2025-03-23T00:18:25.084108 #1]  INFO -- : Replacing # io-threads 4 with io-threads $redis_io_threads in /etc/redis/redis.conf
I, [2025-03-23T00:18:25.085201 #1]  INFO -- : > echo redis installed
I, [2025-03-23T00:18:25.088466 #1]  INFO -- : redis installed

I, [2025-03-23T00:18:25.088953 #1]  INFO -- : > cat /etc/redis/redis.conf | grep logfile
I, [2025-03-23T00:18:25.095957 #1]  INFO -- : logfile ""

I, [2025-03-23T00:18:25.096489 #1]  INFO -- : > exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf
I, [2025-03-23T00:18:25.099538 #1]  INFO -- : > sleep 10
103:C 23 Mar 2025 00:18:25.116 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
103:C 23 Mar 2025 00:18:25.116 # Redis version=7.0.7, bits=64, commit=00000000, modified=0, pid=103, just started
103:C 23 Mar 2025 00:18:25.116 # Configuration loaded
103:M 23 Mar 2025 00:18:25.118 * monotonic clock: POSIX clock_gettime
103:M 23 Mar 2025 00:18:25.120 * Running mode=standalone, port=6379.
103:M 23 Mar 2025 00:18:25.120 # Server initialized
103:M 23 Mar 2025 00:18:25.120 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
103:M 23 Mar 2025 00:18:25.121 * Loading RDB produced by version 7.0.7
103:M 23 Mar 2025 00:18:25.121 * RDB age 913 seconds
103:M 23 Mar 2025 00:18:25.121 * RDB memory usage when created 29.75 Mb
103:M 23 Mar 2025 00:18:25.266 * Done loading RDB, keys loaded: 18090, keys expired: 3.
103:M 23 Mar 2025 00:18:25.266 * DB loaded from disk: 0.145 seconds
103:M 23 Mar 2025 00:18:25.266 * Ready to accept connections
I, [2025-03-23T00:18:35.105146 #1]  INFO -- :
I, [2025-03-23T00:18:35.107388 #1]  INFO -- : > thpoff echo "thpoff is installed!"
I, [2025-03-23T00:18:35.117140 #1]  INFO -- : thpoff is installed!

I, [2025-03-23T00:18:35.118070 #1]  INFO -- : > /usr/local/bin/ruby -e 'if ENV["DISCOURSE_SMTP_ADDRESS"] == "smtp.example.com"; puts "Aborting! Mail is not configured!"; exit 1; end'
I, [2025-03-23T00:18:35.260647 #1]  INFO -- :
I, [2025-03-23T00:18:35.261530 #1]  INFO -- : > /usr/local/bin/ruby -e 'if ENV["DISCOURSE_HOSTNAME"] == "discourse.example.com"; puts "Aborting! Domain is not configured!"; exit 1; end'
I, [2025-03-23T00:18:35.379994 #1]  INFO -- :
I, [2025-03-23T00:18:35.380922 #1]  INFO -- : > /usr/local/bin/ruby -e 'if (ENV["DISCOURSE_CDN_URL"] || "")[0..1] == "//"; puts "Aborting! CDN must have a protocol specified. Once fixed you should rebake your posts now to correct all posts."; exit 1; end'
I, [2025-03-23T00:18:35.520434 #1]  INFO -- :
I, [2025-03-23T00:18:35.521804 #1]  INFO -- : > rm -f /etc/cron.d/anacron
I, [2025-03-23T00:18:35.527278 #1]  INFO -- :
I, [2025-03-23T00:18:35.533681 #1]  INFO -- : File > /etc/cron.d/anacron  chmod:   chown:
I, [2025-03-23T00:18:35.544400 #1]  INFO -- : File > /etc/runit/1.d/copy-env  chmod: +x  chown:
I, [2025-03-23T00:18:35.555450 #1]  INFO -- : File > /etc/service/unicorn/run  chmod: +x  chown:
I, [2025-03-23T00:18:35.565315 #1]  INFO -- : File > /etc/service/nginx/run  chmod: +x  chown:
I, [2025-03-23T00:18:35.575445 #1]  INFO -- : File > /etc/runit/3.d/01-nginx  chmod: +x  chown:
I, [2025-03-23T00:18:35.586497 #1]  INFO -- : File > /etc/runit/3.d/02-unicorn  chmod: +x  chown:
I, [2025-03-23T00:18:35.586705 #1]  INFO -- : Replacing # postgres with sv start postgres || exit 1 in /etc/service/unicorn/run
I, [2025-03-23T00:18:35.587163 #1]  INFO -- : > exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf
I, [2025-03-23T00:18:35.590588 #1]  INFO -- : > cd /var/www/discourse && sudo -H -E -u discourse git reset --hard
130:C 23 Mar 2025 00:18:35.612 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
130:C 23 Mar 2025 00:18:35.612 # Redis version=7.0.7, bits=64, commit=00000000, modified=0, pid=130, just started
130:C 23 Mar 2025 00:18:35.612 # Configuration loaded
130:M 23 Mar 2025 00:18:35.613 * monotonic clock: POSIX clock_gettime
130:M 23 Mar 2025 00:18:35.614 # Warning: Could not create server TCP listening socket *:6379: bind: Address already in use
130:M 23 Mar 2025 00:18:35.614 # Failed listening on port 6379 (TCP), aborting.
Updating files: 100% (32972/32972), done.
I, [2025-03-23T00:18:40.370921 #1]  INFO -- : HEAD is now at 59e548540 Build(deps): Bump sass from 1.58.3 to 1.59.2 in /app/assets/javascripts (#20656)

I, [2025-03-23T00:18:40.371398 #1]  INFO -- : > cd /var/www/discourse && sudo -H -E -u discourse git clean -f
I, [2025-03-23T00:18:40.710584 #1]  INFO -- :
I, [2025-03-23T00:18:40.711030 #1]  INFO -- : > cd /var/www/discourse && sudo -H -E -u discourse bash -c '
  if [ $(git rev-parse --is-shallow-repository) == "true" ]; then
      git remote set-branches --add origin main
      git remote set-branches origin tests-passed
      git fetch --depth 1 origin tests-passed
  else
      git fetch --prune --prune-tags origin tests-passed
  fi
'
From https://github.com/discourse/discourse
 * branch                tests-passed -> FETCH_HEAD
   05e713d09..e7c3abb94  tests-passed -> origin/tests-passed
I, [2025-03-23T00:18:42.586103 #1]  INFO -- :
I, [2025-03-23T00:18:42.586534 #1]  INFO -- : > cd /var/www/discourse && sudo -H -E -u discourse bash -c '
  if [[ $(git symbolic-ref --short HEAD) == tests-passed ]] ; then
      git pull
  else
      git -c advice.detachedHead=false checkout tests-passed
  fi
'
Switched to a new branch 'tests-passed'
I, [2025-03-23T00:18:51.833256 #1]  INFO -- : Branch 'tests-passed' set up to track remote branch 'tests-passed' from 'origin'.

I, [2025-03-23T00:18:51.834334 #1]  INFO -- : > cd /var/www/discourse && mkdir -p tmp
I, [2025-03-23T00:18:51.841544 #1]  INFO -- :
I, [2025-03-23T00:18:51.841855 #1]  INFO -- : > cd /var/www/discourse && chown discourse:www-data tmp
I, [2025-03-23T00:18:51.847601 #1]  INFO -- :
I, [2025-03-23T00:18:51.847953 #1]  INFO -- : > cd /var/www/discourse && mkdir -p tmp/pids
I, [2025-03-23T00:18:51.855859 #1]  INFO -- :
I, [2025-03-23T00:18:51.856222 #1]  INFO -- : > cd /var/www/discourse && mkdir -p tmp/sockets
I, [2025-03-23T00:18:51.863615 #1]  INFO -- :
I, [2025-03-23T00:18:51.863977 #1]  INFO -- : > cd /var/www/discourse && touch tmp/.gitkeep
I, [2025-03-23T00:18:51.869796 #1]  INFO -- :
I, [2025-03-23T00:18:51.870182 #1]  INFO -- : > cd /var/www/discourse && mkdir -p                    /shared/log/rails
I, [2025-03-23T00:18:51.876106 #1]  INFO -- :
I, [2025-03-23T00:18:51.876454 #1]  INFO -- : > cd /var/www/discourse && bash -c "touch -a           /shared/log/rails/{production,production_errors,unicorn.stdout,unicorn.stderr,sidekiq}.log"
I, [2025-03-23T00:18:51.888118 #1]  INFO -- :
I, [2025-03-23T00:18:51.888454 #1]  INFO -- : > cd /var/www/discourse && bash -c "ln    -s           /shared/log/rails/{production,production_errors,unicorn.stdout,unicorn.stderr,sidekiq}.log /var/www/discourse/log"
I, [2025-03-23T00:18:51.897590 #1]  INFO -- :
I, [2025-03-23T00:18:51.898001 #1]  INFO -- : > cd /var/www/discourse && bash -c "mkdir -p           /shared/{uploads,backups}"
I, [2025-03-23T00:18:51.906190 #1]  INFO -- :
I, [2025-03-23T00:18:51.906512 #1]  INFO -- : > cd /var/www/discourse && bash -c "ln    -s           /shared/{uploads,backups} /var/www/discourse/public"
I, [2025-03-23T00:18:51.917159 #1]  INFO -- :
I, [2025-03-23T00:18:51.917467 #1]  INFO -- : > cd /var/www/discourse && bash -c "mkdir -p           /shared/tmp/{backups,restores}"
I, [2025-03-23T00:18:51.927203 #1]  INFO -- :
I, [2025-03-23T00:18:51.927487 #1]  INFO -- : > cd /var/www/discourse && bash -c "ln    -s           /shared/tmp/{backups,restores} /var/www/discourse/tmp"
I, [2025-03-23T00:18:51.937966 #1]  INFO -- :
I, [2025-03-23T00:18:51.938298 #1]  INFO -- : > cd /var/www/discourse && chown -R discourse:www-data /shared/log/rails /shared/uploads /shared/backups /shared/tmp
I, [2025-03-23T00:18:52.001123 #1]  INFO -- :
I, [2025-03-23T00:18:52.001476 #1]  INFO -- : > cd /var/www/discourse && [ ! -d public/plugins ] || find public/plugins/ -maxdepth 1 -xtype l -delete
I, [2025-03-23T00:18:52.010734 #1]  INFO -- :
I, [2025-03-23T00:18:52.011660 #1]  INFO -- : Replacing # redis with sv start redis || exit 1 in /etc/service/unicorn/run
I, [2025-03-23T00:18:52.013337 #1]  INFO -- : > cd /var/www/discourse/plugins && mkdir -p plugins
I, [2025-03-23T00:18:52.019369 #1]  INFO -- :
I, [2025-03-23T00:18:52.019704 #1]  INFO -- : > cd /var/www/discourse/plugins && git clone https://github.com/discourse/docker_manager.git
Cloning into 'docker_manager'...
I, [2025-03-23T00:18:53.224801 #1]  INFO -- :
I, [2025-03-23T00:18:53.225328 #1]  INFO -- : > cd /var/www/discourse/plugins && git clone https://github.com/discourse/discourse-spoiler-alert.git
Cloning into 'discourse-spoiler-alert'...
I, [2025-03-23T00:18:53.893263 #1]  INFO -- :
I, [2025-03-23T00:18:53.893765 #1]  INFO -- : > cd /var/www/discourse/plugins && git clone https://github.com/discourse/discourse-data-explorer.git
Cloning into 'discourse-data-explorer'...
I, [2025-03-23T00:18:54.647629 #1]  INFO -- :
I, [2025-03-23T00:18:54.647998 #1]  INFO -- : > cd /var/www/discourse/plugins && git clone https://github.com/merefield/discourse-onebox-assistant.git
Cloning into 'discourse-onebox-assistant'...
I, [2025-03-23T00:18:55.121580 #1]  INFO -- :
I, [2025-03-23T00:18:55.122655 #1]  INFO -- : > cp /var/www/discourse/config/nginx.sample.conf /etc/nginx/conf.d/discourse.conf
I, [2025-03-23T00:18:55.127568 #1]  INFO -- :
I, [2025-03-23T00:18:55.128317 #1]  INFO -- : > rm /etc/nginx/sites-enabled/default
I, [2025-03-23T00:18:55.133169 #1]  INFO -- :
I, [2025-03-23T00:18:55.133494 #1]  INFO -- : > mkdir -p /var/nginx/cache
I, [2025-03-23T00:18:55.137201 #1]  INFO -- :
I, [2025-03-23T00:18:55.137985 #1]  INFO -- : Replacing pid /run/nginx.pid; with daemon off; in /etc/nginx/nginx.conf
I, [2025-03-23T00:18:55.139546 #1]  INFO -- : Replacing (?m-ix:upstream[^\}]+\}) with upstream discourse { server 127.0.0.1:3000; } in /etc/nginx/conf.d/discourse.conf
I, [2025-03-23T00:18:55.140371 #1]  INFO -- : Replacing (?-mix:server_name.+$) with server_name _ ; in /etc/nginx/conf.d/discourse.conf
I, [2025-03-23T00:18:55.141165 #1]  INFO -- : Replacing (?-mix:client_max_body_size.+$) with client_max_body_size $upload_size ; in /etc/nginx/conf.d/discourse.conf
I, [2025-03-23T00:18:55.142058 #1]  INFO -- : Replacing (?-mix:worker_connections.+$) with worker_connections $nginx_worker_connections ; in /etc/nginx/nginx.conf
I, [2025-03-23T00:18:55.142716 #1]  INFO -- : > echo "done configuring web"
I, [2025-03-23T00:18:55.145799 #1]  INFO -- : done configuring web

I, [2025-03-23T00:18:55.146504 #1]  INFO -- : > cd /var/www/discourse && gem install bundler --conservative -v $(awk '/BUNDLED WITH/ { getline; gsub(/ /,""); print $0 }' Gemfile.lock)
I, [2025-03-23T00:18:56.661918 #1]  INFO -- : Successfully installed bundler-2.6.4
1 gem installed

I, [2025-03-23T00:18:56.662443 #1]  INFO -- : > cd /var/www/discourse && find /var/www/discourse ! -user discourse -exec chown discourse {} \+
I, [2025-03-23T00:18:58.289649 #1]  INFO -- :
I, [2025-03-23T00:18:58.290115 #1]  INFO -- : > cd /var/www/discourse && su discourse -c 'yarn install --production --frozen-lockfile && yarn cache clean'
error discourse@: The engine "node" is incompatible with this module. Expected version ">= 20". Got "18.15.0"
error discourse@: The engine "yarn" is incompatible with this module. Expected version "please-use-pnpm". Got "1.22.19"
warning discourse@: The engine "pnpm" appears to be invalid.
error Found incompatible module.
I, [2025-03-23T00:18:58.802017 #1]  INFO -- : yarn install v1.22.19
info No lockfile found.
[1/5] Validating package.json...
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.

I, [2025-03-23T00:18:58.803434 #1]  INFO -- : Terminating async processes
I, [2025-03-23T00:18:58.803753 #1]  INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main pid: 42
I, [2025-03-23T00:18:58.804011 #1]  INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 103
103:signal-handler (1742689138) Received SIGTERM scheduling shutdown...
2025-03-23 00:18:58.804 UTC [42] LOG:  received fast shutdown request
103:M 23 Mar 2025 00:18:58.806 # User requested shutdown...
103:M 23 Mar 2025 00:18:58.806 * Saving the final RDB snapshot before exiting.
2025-03-23 00:18:58.863 UTC [42] LOG:  aborting any active transactions
2025-03-23 00:18:58.868 UTC [42] LOG:  background worker "logical replication launcher" (PID 51) exited with exit code 1
2025-03-23 00:18:58.871 UTC [46] LOG:  shutting down
2025-03-23 00:18:58.960 UTC [42] LOG:  database system is shut down
103:M 23 Mar 2025 00:18:59.184 * DB saved on disk
103:M 23 Mar 2025 00:18:59.184 # Redis is now ready to exit, bye bye...

Am I right in assuming that these errors are the primary cause of the failure?

error discourse@: The engine "node" is incompatible with this module. Expected version ">= 20". Got "18.15.0"
error discourse@: The engine "yarn" is incompatible with this module. Expected version "please-use-pnpm". Got "1.22.19"
warning discourse@: The engine "pnpm" appears to be invalid.

I then attempted to update these tools using npm. I installed npm on the Discourse server and tried to upgrade yarn, but it needed node as a dependency. When I tried to upgrade node, I received an error that a particular file required administrative access during the install, along with a suggested chown command to change its ownership. I ran the chown, but it made no difference.

That’s ultimately where we stopped.

Here’s my ask:

  1. If we do get this yarn/node situation sorted out, will that resolve the rebuild error? And how do we do that?

  2. Is there any way I can compel the server to make a local backup now, outside of S3? (See the sketch after this list for what I have in mind.) If I can do that, we may just abandon ship and restore to a new Discourse hosted server.

  3. Are there paid Discourse services that could help us? My time is almost non-existent, and I want our community to be saved even if it costs us a bit.
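
For question 2, the command-line route I’m aware of looks roughly like this (a sketch; I’m not sure whether the S3 config will still intercept the result):

    cd /var/discourse
    ./launcher enter app       # shell inside the running container
    discourse backup           # archive lands under /shared/backups/default unless it gets shipped to S3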

Lastly, the server is running Ubuntu 20.04. Additionally, here is our plugin list:

==================== PLUGINS ====================
          - git clone https://github.com/discourse/docker_manager.git
          - git clone https://github.com/discourse/discourse-spoiler-alert.git
          - git clone https://github.com/discourse/discourse-data-explorer.git
          - git clone https://github.com/merefield/discourse-onebox-assistant.git

WARNING:
You have what appear to be non-official plugins.
If you are having trouble, you should disable them and try rebuilding again.

See https://github.com/discourse/discourse/blob/main/lib/plugin/metadata.rb for the official list.

But I am presuming these have nothing to do with the failed rebuild.
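
(If it comes to that, my understanding is that disabling a plugin is just a matter of commenting out its clone line in app.yml and rebuilding; a sketch, assuming the standard path:)

    nano /var/discourse/containers/app.yml     # prefix the plugin's "- git clone ..." line with "#"
    cd /var/discourse && ./launcher rebuild app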

Thank you for your help.

If backups are going there, you do have access to it. And yes, so does whoever owns that bucket.

Feel free to open a topic on Marketplace if you wish to explore third party paid services.

4 Likes

That’s a tricky position to be in - you have my sympathy.

Personally, I would very much want to get a backup before trying a rebuild. If the regular backup process isn’t useful to you (because it sends backups somewhere inaccessible) then I’d try somehow to get a database backup using the command line, but I’m not sure exactly how. Maybe pg_dump inside docker?
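
Something along these lines might work (an untested sketch; the container name app and the /shared paths are taken from your rebuild log):

    cd /var/discourse
    ./launcher enter app                 # shell inside the running container
    su postgres -c 'pg_dump discourse' | gzip > /shared/postgres_backup/discourse-$(date +%F).sql.gz
    # note: this captures the database only, not uploads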

Or maybe you can use your command line access to redirect backups to the local disk instead of S3.

But in both cases you will need sufficient local disk space.

Edit: crossed in the post with Jay.

5 Likes

Thank you - my general idea, if we could do a backup, would indeed be to redirect it to a local disk instead of S3, and we would have space for it.

It’s my fault that I didn’t figure out the local backup last night rather than doing the rebuild - hindsight is 20/20 on that. I underestimated the impact of a rebuild.

Can you help me understand this? Are you saying that credentials must be present somewhere if backups are being routed there? (We did confirm a new backup showed up in the admin panel of our Discourse.)

Trouble is, if we need the credentials to access the backup, I don’t believe anyone actually has them except the fella who ghosted.

Would literatecomputing be able to get a local backup and restore our existing site to a new maintained server if they were to take on the work?

If the S3 bucket has backups as recent as last week, then Discourse has credentials to the bucket. They are probably in app.yml or in the site settings.
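
A quick way to check the file side (path assumed from a standard install):

    grep -iE 's3|backup' /var/discourse/containers/app.yml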

But you don’t need access to the S3 bucket yourself; you should be able to download the backup through Discourse.

Do you see the backups in /admin/backups ?
If so, what happens when you try to download them?

You could also change Site Settings - Backups - Backup location to “local storage”.
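
If the admin UI is unreachable, the same setting can usually be flipped from a Rails console inside the container; a rough sketch (standard install assumed):

    cd /var/discourse
    ./launcher enter app
    rails c
    # then, at the Rails console prompt:
    SiteSetting.backup_location = "local"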

5 Likes

Yes. If you are backing up to S3 then the credentials are in your database or in the yml file.

Yes. The backup_location setting lives in either the SiteSettings or the yml file. If it’s set in the yml file rather than the database, then it’s harder, but not impossible, to change.
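
To see which applies, you can query the settings table directly; a sketch, assuming the container is named app as in your log:

    cd /var/discourse
    ./launcher enter app
    su postgres -c "psql discourse -c \"SELECT name, value FROM site_settings WHERE name LIKE '%backup%' OR name LIKE '%s3%';\""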

1 Like

I’m just a noob, but could this be from the ownership issue with a file on rebuild that was reported in a recent topic?

If you have command-line access, why not do a command-line backup?

I have root access. I did do a command-line backup, but it pushed to S3. Thanks to @pfaffman’s comments, I’m now realizing I can try pulling the backup from S3 down to local - I just need the time to attempt it.
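
If anyone’s curious, the rough plan is something like this (bucket name hypothetical; the keys would come from app.yml):

    export AWS_ACCESS_KEY_ID=...                    # from app.yml
    export AWS_SECRET_ACCESS_KEY=...                # from app.yml
    aws s3 ls s3://our-backup-bucket/backups/       # list what's there
    aws s3 cp s3://our-backup-bucket/backups/<latest>.tar.gz /var/discourse/shared/standalone/backups/default/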

Do you see the backup_location setting in the settings UI? (Or is the server down so you can’t see it?)

2 Likes

Do you mean this warning?

WARNING: containers/app.yml file is world-readable. You can secure this file by running: chmod o-rwx containers/app.yml

It’s a warning. For many years the default was for that file to be world-readable (on the assumption that most self-hosters just log in as root and have no other users), but at some point it was decided that having the secrets in that file readable to everyone isn’t best practice. Since you’re running launcher as root, root will always be able to read the file.
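
If you want to silence it (standard checkout path assumed):

    cd /var/discourse
    chmod o-rwx containers/app.yml     # strip read/write/execute for "other"
    ls -l containers/app.yml           # e.g. -rw-r--r-- becomes -rw-r-----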

1 Like

I do not see an admin/backups. Where is that? The only place I have seen backups is /var/discourse/shared/standalone/backups/default, but those are all ancient local backups.

I will have to follow up on the site settings situation later, when the person who can access that is awake (they are on UK time). I’m presuming they have no access since the site is down.

I am not seeing a specific backup_location setting in the app.yml file.

Also, a sidebar: I saw in your company’s About page that you’re a former CS teacher. That’s my day job presently :smiley:

Not that one. When I have a chance I’ll post the specific error, but it was from trying to upgrade node, as I said, not during the rebuild.

2 Likes

Add /admin/backups to your forum’s URL. You should be able to see your backups in the user interface.
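
For example (domain hypothetical):

    https://forum.example.com/admin/backups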

Ah, got it. I was looking for this on the server itself. The site is completely down, so I don’t have access to the page.

1 Like

Just a general question: what are the specs on the server, including the OS version?

That discourse/base:2.0.20230313-1023 in your log is an ancient [1] image and probably the source of these errors.

You probably need to git pull the discourse_docker directory (the directory from which you run launcher).

As usual, take a backup of the server first since you’re in a degraded state.
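
Roughly (assuming the standard /var/discourse checkout):

    cd /var/discourse
    git pull                   # update discourse_docker itself, including launcher and the base image refs
    ./launcher rebuild app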


  1. in Internet time ↩︎

1 Like

We were able to get a new local backup sorted out today, so I’m downloading it locally now.

So I’m just gonna git pull in /var/discourse, and then try a rebuild to update?

Your branch and 'origin/main' have diverged,
and have 15 and 201 different commits each, respectively.
  (use "git pull" to merge the remote branch into yours)

diverged is, indeed, an understatement :smile:

1 Like

I’m guessing you’ve been committing your container configurations, so git pull or git pull --rebase might get you where you need to be - may as well try it :+1:
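
Something like this, say (a sketch; stash first only if uncommitted local edits block the pull):

    cd /var/discourse
    git stash                  # set aside uncommitted changes, if any
    git pull --rebase          # replay your local commits on top of the updated origin/main
    git stash pop              # bring the uncommitted changes back
    ./launcher rebuild app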

My friend, I have no clue what has been happening, but I will see what we get with a pull, or a rebase if necessary. I’m going to create a new maintenance window since, for some reason, the site did come back up on the old version. I’ll update you all with the results.

I really appreciate all the wisdom!

1 Like