Been having an odd issue since upgrading Discourse.
```shell
cd /var/discourse
./launcher enter app
cd /shared/log/rails
tail -f production.log
```

gives the following, ad infinitum:
```text
Job exception: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
Error connecting to Redis on localhost:6379 (Redis::TimeoutError)
```
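For reference, the failing background save can be confirmed directly from redis-cli inside the container; a quick sketch (assuming redis-cli is on the PATH there):

```shell
# Show persistence state; "rdb_last_bgsave_status:err" would confirm
# that background saves are failing.
redis-cli info persistence | grep -E 'rdb_last_bgsave_status|rdb_changes_since_last_save'

# Where is Redis trying to write the RDB file?
redis-cli config get dir
redis-cli config get dbfilename
```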
`unicorn.stderr.log` is full of:

```text
E, [2022-03-01T20:45:20.703541 #65] ERROR -- : reaped #<Process::Status: pid 30842 exit 1> worker=unknown
Detected dead worker 30842, restarting...
Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 3 Job exception: Error connecting to Redis on localhost:6379 (Redis::TimeoutError)
MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/redis-4.1.3/lib/redis/client.rb:126:in `call'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/redis-4.1.3/lib/redis.rb:538:in `block in del'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/redis-4.1.3/lib/redis.rb:52:in `block in synchronize'
...
```
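The MISCONF message says to check the Redis logs themselves; in a stock Discourse container I believe those end up under runit's log directory, though the exact path here is my assumption:

```shell
# Tail the Redis service log inside the container; the path is a guess
# based on the stock Discourse image's runit/svlogd layout
tail -n 50 /var/log/redis/current
```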
Clearly this is a problem with Redis. I suspected it was being killed off somehow, but the kernel log outside the Docker image shows no sign of that. Inside the Docker image, however, I see:
```text
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         157G   53G   97G  36% /
tmpfs            64M     0   64M   0% /dev
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
shm             512M     0  512M   0% /dev/shm
/dev/sda        157G   53G   97G  36% /shared
tmpfs           3.9G     0  3.9G   0% /proc/acpi
tmpfs           3.9G     0  3.9G   0% /proc/scsi
tmpfs           3.9G     0  3.9G   0% /sys/firmware
```
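Free space looks fine, but an RDB save can still fail with free bytes available if the filesystem runs out of inodes; that's another hypothesis worth checking:

```shell
# IUse% near 100 would prevent Redis from creating its temporary RDB file
df -i /
```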
```text
$ free
              total        used        free      shared  buff/cache   available
Mem:        8167420     3248628      794816        8596     4123976     4634504
Swap:        524284         268      524016
```
Not great, but not horrible it seems.
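Memory-wise, one classic cause of failed BGSAVEs is the fork being refused under the kernel's default overcommit policy, which Redis itself warns about at startup. A quick check (sketch; 1 is the value the Redis docs recommend):

```shell
# 0 = heuristic overcommit (the default); with a large Redis dataset the
# fork() behind BGSAVE can be denied, making every snapshot attempt fail
cat /proc/sys/vm/overcommit_memory
```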
Back on the host system, `htop` shows:
Any thoughts on steps to debug/resolve this?
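Also, would it be reasonable to temporarily lift the write guard while investigating? Something like the following (clearly a stopgap that masks the symptom, not a fix):

```shell
# Allow writes to proceed even while bgsave is failing (temporary!)
redis-cli config set stop-writes-on-bgsave-error no
# ...then force a save by hand so the real error shows up in the Redis log
redis-cli bgsave
```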