I had some issues with the upgrade: the first forum failed on the first attempt (via the dashboard), failed again via a rebuild, and seemed to work on the second rebuild attempt, although I then had to rebuild one additional time. That reminded me that I needed to stop all Discourse instances when doing the PG12 upgrade (there are three Discourse forums on this server, each in its own container), and so the following worked for the other two forums:
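Roughly this, run from /var/discourse, with forum1, forum2 and forum3 standing in for the actual container names:

    # stop all three containers first, so nothing is running during the upgrade
    ./launcher stop forum1
    ./launcher stop forum2
    ./launcher stop forum3
    # then rebuild the other two forums, which came back fine
    ./launcher rebuild forum2
    ./launcher rebuild forum3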
However, for some reason the first forum is no longer accessible, with Safari saying the server unexpectedly dropped the connection. A rebuild appears to complete fine, but the site still isn't accessible, even though I can enter the app and the Rails console and the database appears intact.
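The database check was nothing fancier than this (again with a placeholder container name, and Post just as an example table):

    cd /var/discourse
    ./launcher enter forum1   # placeholder name for the first forum's container
    rails c
    # in the console, e.g.:
    #   Post.count   # returns a plausible number, so the data looks intact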
The only warnings I can see in the rebuild output that might be relevant:
168:M 31 Jan 2021 21:39:22.459 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
168:M 31 Jan 2021 21:39:22.459 # Server initialized
168:M 31 Jan 2021 21:39:22.459 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
168:M 31 Jan 2021 21:39:22.459 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
168:M 31 Jan 2021 21:39:22.459 * Loading RDB produced by version 6.0.9
168:M 31 Jan 2021 21:39:22.459 * RDB age 21 seconds
168:M 31 Jan 2021 21:39:22.459 * RDB memory usage when created 4.03 Mb
168:M 31 Jan 2021 21:39:22.466 * DB loaded from disk: 0.006 seconds
168:M 31 Jan 2021 21:39:22.466 * Ready to accept connections
production.log:
Job exception: Error connecting to Redis on localhost:6379 (Errno::ENETUNREACH)
Error connecting to Redis on localhost:6379 (Errno::ENETUNREACH) subscribe failed, reconnecting in 1 second. Call stack /var/www/discourse/vendor/bundle/ruby/2.7.0/gems/redis-4.2.5/lib/redis/client.rb:367:in `rescue in establish_connection'
Similar messages appear in unicorn.stderr.log and unicorn.stdout.log.
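In case it helps with diagnosis, this is the kind of check I can run from inside the container to see which Redis host/port the app should be using (the defaults file path is the one from a standard install, so treat it as an assumption):

    # any DISCOURSE_REDIS_* overrides set in the container environment
    env | grep -i redis
    # the fallback values the app uses when nothing is overridden
    grep -i '^redis' /var/www/discourse/config/discourse_defaults.conf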
Entering the container and running redis-cli ping, I get a PONG back. Redis is running on the server (but not in the individual containers, though as far as I know this has always been the case).
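For reference, the ping check is just this, from a shell inside the same container (the second line is an extra variation targeting exactly the host/port named in the error):

    redis-cli ping                        # returns PONG
    redis-cli -h localhost -p 6379 ping   # same host/port the error message names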
Any ideas what might be going on?
(I've also rebooted the server and created a new Let's Encrypt cert for this domain to be on the safe side, but it's still the same.)