Losing connection with Redis


(Matt McNeil) #1

After a week or more of running smoothly with the latest stable Discourse (v1.4.3) on a standard web/data Docker setup, I’ve had two mysterious Discourse web app crashes in the last 18 hours. Rebooting the DO server solved the issue each time, but it seems like it’s recurring every 9 hours or so.
From what I can tell, it’s due to Redis becoming unresponsive, though the Redis logs show nothing obvious.
When I tried to hit the Discourse Rails app in a browser, I received a server error page. When I ssh’d into the Digital Ocean server, here’s what I saw in top:

 1338 1000      20   0  482036 239464   9980 R  83.7 11.7   1:48.19 ruby
 1111 root      20   0  143812  11268   2904 S  12.3  0.5   2:11.47 docker
 1283 landsca+  20   0   42748   7996   1244 S   4.6  0.4   1:50.01 redis-server

Here’s a snippet from the syslog around that time; Redis seems to be running normally, then simply disappears:

Dec  4 23:00:47 discourse-data redis[1020]: RDB: 2 MB of memory used by copy-on-write
Dec  4 23:00:47 discourse-data redis[24]: Background saving terminated with success
Dec  4 23:02:24 discourse-data redis[24]: DB saved on disk
Dec  4 23:02:26 discourse-data redis[24]: DB saved on disk
Dec  4 23:07:27 discourse-data redis[24]: 10 changes in 300 seconds. Saving...
Dec  4 23:07:27 discourse-data redis[24]: Background saving started by pid 1030
Dec  4 23:07:27 discourse-data redis[1030]: DB saved on disk
Dec  4 23:07:27 discourse-data redis[1030]: RDB: 2 MB of memory used by copy-on-write
Dec  4 23:07:27 discourse-data redis[24]: Background saving terminated with success

Here’s what it looks like from the Unicorn logs:

E, [2015-12-04T23:19:08.559532 #44] ERROR -- : master loop error: NOAUTH Authentication required. (Redis::CommandError)
E, [2015-12-04T23:19:08.559794 #44] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.1/lib/redis/client.rb:113:in `call'
E, [2015-12-04T23:19:08.559854 #44] ERROR -- : /var/www/discourse/plugins/new_relic-discourse/gems/2.0.0/gems/newrelic_rpm-3.13.1.300/lib/new_relic/agent/instrumentation/redis.rb:42:in `block in call'
E, [2015-12-04T23:19:08.559901 #44] ERROR -- : /var/www/discourse/plugins/new_relic-discourse/gems/2.0.0/gems/newrelic_rpm-3.13.1.300/lib/new_relic/agent/datastores.rb:111:in `block in wrap'
E, [2015-12-04T23:19:08.559957 #44] ERROR -- : /var/www/discourse/plugins/new_relic-discourse/gems/2.0.0/gems/newrelic_rpm-3.13.1.300/lib/new_relic/agent/method_tracer.rb:73:in `block in trace_execution_scoped'
E, [2015-12-04T23:19:08.560004 #44] ERROR -- : /var/www/discourse/plugins/new_relic-discourse/gems/2.0.0/gems/newrelic_rpm-3.13.1.300/lib/new_relic/agent/method_tracer_helpers.rb:82:in `trace_execution_scoped'
E, [2015-12-04T23:19:08.560048 #44] ERROR -- : /var/www/discourse/plugins/new_relic-discourse/gems/2.0.0/gems/newrelic_rpm-3.13.1.300/lib/new_relic/agent/method_tracer.rb:71:in `trace_execution_scoped'
E, [2015-12-04T23:19:08.560092 #44] ERROR -- : /var/www/discourse/plugins/new_relic-discourse/gems/2.0.0/gems/newrelic_rpm-3.13.1.300/lib/new_relic/agent/datastores.rb:108:in `wrap'
E, [2015-12-04T23:19:08.560136 #44] ERROR -- : /var/www/discourse/plugins/new_relic-discourse/gems/2.0.0/gems/newrelic_rpm-3.13.1.300/lib/new_relic/agent/instrumentation/redis.rb:41:in `call'
E, [2015-12-04T23:19:08.560189 #44] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.1/lib/redis.rb:789:in `block in get'
E, [2015-12-04T23:19:08.560236 #44] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.1/lib/redis.rb:37:in `block in synchronize'
E, [2015-12-04T23:19:08.560280 #44] ERROR -- : /usr/local/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
E, [2015-12-04T23:19:08.560324 #44] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.1/lib/redis.rb:37:in `synchronize'
E, [2015-12-04T23:19:08.560367 #44] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.1/lib/redis.rb:788:in `get'
E, [2015-12-04T23:19:08.560450 #44] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:59:in `block (3 levels) in <class:DiscourseRedis>'
E, [2015-12-04T23:19:08.560499 #44] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:27:in `ignore_readonly'
E, [2015-12-04T23:19:08.560542 #44] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:59:in `block (2 levels) in <class:DiscourseRedis>'
E, [2015-12-04T23:19:08.560643 #44] ERROR -- : /var/www/discourse/app/jobs/regular/run_heartbeat.rb:13:in `last_heartbeat'
E, [2015-12-04T23:19:08.560695 #44] ERROR -- : config/unicorn.conf.rb:119:in `check_sidekiq_heartbeat'
E, [2015-12-04T23:19:08.560811 #44] ERROR -- : config/unicorn.conf.rb:146:in `master_sleep'
E, [2015-12-04T23:19:08.560915 #44] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:295:in `join'
E, [2015-12-04T23:19:08.560965 #44] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.8.3/bin/unicorn:126:in `<top (required)>'
E, [2015-12-04T23:19:08.561010 #44] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `load'
E, [2015-12-04T23:19:08.561123 #44] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'
E, [2015-12-04T23:19:08.561816 #44] ERROR -- : master loop error: NOAUTH Authentication required. (Redis::CommandError)
E, [2015-12-04T23:19:08.561887 #44] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.1/lib/redis/client.rb:113:in `call'
E, [2015-12-04T23:19:08.561936 #44] ERROR -- : /var/www/discourse/plugins/new_relic-discourse/gems/2.0.0/gems/newrelic_rpm-3.13.1.300/lib/new_relic/agent/instrumentation/redis.rb:42:in `block in call'

Any ideas on how to troubleshoot? Thanks!!


(Jeff Atwood) #2

I would remove the New Relic plugin there to start. And any other plugins we do not ship.
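
Roughly, assuming the plugin was installed the standard way via the after_code hook in your container definition (a sketch, not the exact lines from your yml):

# in /var/discourse/containers/web.yml (or app.yml on a single-container setup),
# under hooks: after_code:, delete the git clone line for the plugin, e.g.
#   - git clone <new_relic-discourse repo URL>
cd /var/discourse
./launcher rebuild web    # or: ./launcher rebuild app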


(Matt McNeil) #3

Thanks, Jeff.

I ended up just doing the following, which fixed it for about 3 days:

cd /var/discourse && git pull && ./launcher bootstrap web && ./launcher destroy web && ./launcher start web && ./launcher cleanup

However, just this morning the failure happened again, and this time there was a clear issue in the Redis logs: after days of running fine, it suddenly started getting permission-denied errors:

Dec  8 15:39:14 discourse-data redis[21]: 10 changes in 300 seconds. Saving...
Dec  8 15:39:14 discourse-data redis[21]: Background saving started by pid 14810
Dec  8 15:39:14 discourse-data redis[14810]: DB saved on disk
Dec  8 15:39:14 discourse-data redis[14810]: RDB: 0 MB of memory used by copy-on-write
Dec  8 15:39:14 discourse-data redis[21]: Background saving terminated with success
Dec  8 15:41:07 discourse-data redis[21]: Failed opening .rdb for saving: Permission denied
Dec  8 15:41:07 discourse-data redis[21]: Failed opening .rdb for saving: Permission denied
Dec  8 15:44:15 discourse-data redis[21]: 10 changes in 300 seconds. Saving...
Dec  8 15:44:15 discourse-data redis[21]: Background saving started by pid 14822
Dec  8 15:44:15 discourse-data redis[14822]: Failed opening .rdb for saving: Permission denied
Dec  8 15:44:15 discourse-data redis[21]: Background saving error
Dec  8 15:44:21 discourse-data redis[21]: 10 changes in 300 seconds. Saving...
Dec  8 15:44:21 discourse-data redis[21]: Background saving started by pid 14823

A simple server reboot seemed to fix things (i.e. it brought Discourse back up).

I then checked the permissions of the .rdb file and they seem fine:

root@discourse-data:/shared# ls -l
drwxr-xr-x  2 redis    redis    4096 Dec  8 17:10 redis_data

root@discourse-data:/shared/redis_data# ls -l
-rw-r--r-- 1 redis redis 631618 Dec  8 17:00 redis.rdb

I notice that Discourse-docker is running an older version of Redis, and I did see an anecdotal mention of a similar permission error on a comparable version that was fixed by upgrading: Redis: Failed opening .rdb for saving: Permission denied - Stack Overflow

root@discourse-data:/shared/redis_data# redis-cli
127.0.0.1:6379> help
redis-cli 2.8.13
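
(For what it’s worth, that banner is the redis-cli client version; the server’s own version can also be double-checked with something like:)

redis-cli INFO server | grep redis_version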

@sam: Any thoughts on this issue?


(Allen - Watchman Monitoring) #4

reminds me of the problems we had here

https://meta.discourse.org/t/host-disk-filling-up-with-thin-1-log-and-production-log-w-redis-errors/36353?source_topic_id=36343

(Matt McNeil) #5

@codinghorror @sam I removed the new_relic plugin and rebuilt the Docker container, but this time it only lasted about 3 hours before Redis stopped being able to do background saves, crashing Discourse:

Dec  8 15:34:13 discourse-data redis[21]: Background saving started by pid 14799
Dec  8 15:34:13 discourse-data redis[14799]: DB saved on disk
Dec  8 15:34:13 discourse-data redis[14799]: RDB: 0 MB of memory used by copy-on-write
Dec  8 15:34:13 discourse-data redis[21]: Background saving terminated with success
Dec  8 15:39:14 discourse-data redis[21]: 10 changes in 300 seconds. Saving...
Dec  8 15:39:14 discourse-data redis[21]: Background saving started by pid 14810
Dec  8 15:39:14 discourse-data redis[14810]: DB saved on disk
Dec  8 15:39:14 discourse-data redis[14810]: RDB: 0 MB of memory used by copy-on-write
Dec  8 15:39:14 discourse-data redis[21]: Background saving terminated with success
Dec  8 15:41:07 discourse-data redis[21]: Failed opening .rdb for saving: Permission denied
Dec  8 15:41:07 discourse-data redis[21]: Failed opening .rdb for saving: Permission denied
Dec  8 15:44:15 discourse-data redis[21]: 10 changes in 300 seconds. Saving...
Dec  8 15:44:15 discourse-data redis[21]: Background saving started by pid 14822
Dec  8 15:44:15 discourse-data redis[14822]: Failed opening .rdb for saving: Permission denied
Dec  8 15:44:15 discourse-data redis[21]: Background saving error

Any idea what could be happening in this span of 2 minutes that could lead to a permission-denied error? And/or why rebooting the server immediately fixes it (temporarily)?

Dec  8 15:39:14 discourse-data redis[21]: Background saving terminated with success
Dec  8 15:41:07 discourse-data redis[21]: Failed opening .rdb for saving: Permission denied

(Sam Saffron) #6

Have you looked in the folder with the RDB? What do the permissions look like? Do you have some sort of process or cron job that goes about changing permissions?
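
For example, from inside the data container (container name assumed to be “data” here), something along these lines:

cd /var/discourse
./launcher enter data            # or: docker exec -it data bash
ls -ld /shared/redis_data
ls -l /shared/redis_data/*.rdb
crontab -l; ls /etc/cron.d       # anything unexpected that might touch permissions?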


(Matt McNeil) #7

Thanks @sam. The permissions of the folder and file are copied above. I’m using the standard Discourse Docker image and have not added any cron jobs.

(sent from a phone)


(Sam Saffron) #8

This is totally strange … our latest image is redis 3.0.5

Have you tried:

cd /var/discourse
git pull
./launcher rebuild app

(Matt McNeil) #9

The compound command I’ve been using to update the image is copied near the top of the thread; I thought it was the better one to use (instead of rebuild) with a data/web container setup, in order to minimize downtime. Perhaps that’s the root of my issues?

(sent from a phone)


(Sam Saffron) #10

If you have a data container you are going to have to rebuild it as well; maybe that is the root. Regardless, we ship a much more recent Redis, so something is amiss.


(Matt McNeil) #11

Sure, totally makes sense. How best then to rebuild the data container while preserving all data and minimizing downtime?

Thanks so much for your help!

(sent from a phone)


(Sam Saffron) #12

rebuild never throws away data :slight_smile: We don’t store any important data in containers, so at any point in time you can nuke containers and rebuild.
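
So with a data/web split it is safe to rebuild both (container names assumed to match your yml files):

cd /var/discourse
git pull
./launcher rebuild data
./launcher rebuild web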


(Gergely Munkácsy) #13

I think this problem is not connected to Discourse. We have an app with multiple containers (Node.js, Redis, Postgres), and a few days ago we got the same error on multiple servers. We use AWS: one instance in Europe, one in Singapore.
The logs are the same as here: the Redis container was clean, with no errors or anything unusual anywhere.
I already checked the available disk space and the memory usage, and we have enough of both.

Do you have any idea what might cause this error?


(Sam Saffron) #14

We run a lot of Redises in containers here and do not see this error; my guess is that it is version specific.


(Matt McNeil) #15

Oy, ok, I rebuilt the data container, which upgraded to Redis 3.0.5, and now the situation is even worse. It’s now crashing after 5-10 minutes. For example, here’s what’s happening just after restarting:

Dec  9 14:57:49 discourse-data redis[21]: Server started, Redis version 3.0.5
Dec  9 14:57:49 discourse-data redis[21]: WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Dec  9 14:57:49 discourse-data redis[21]: WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
Dec  9 14:57:49 discourse-data redis[21]: DB loaded from disk: 0.003 seconds
Dec  9 14:57:49 discourse-data redis[21]: The server is now ready to accept connections on port 6379
Dec  9 15:02:50 discourse-data redis[21]: 10 changes in 300 seconds. Saving...
Dec  9 15:02:50 discourse-data redis[21]: Background saving started by pid 100
Dec  9 15:02:50 discourse-data redis[100]: DB saved on disk
Dec  9 15:02:50 discourse-data redis[100]: RDB: 2 MB of memory used by copy-on-write
Dec  9 15:02:50 discourse-data redis[21]: Background saving terminated with success
Dec  9 15:02:52 discourse-data redis[21]: DB saved on disk
Dec  9 15:06:34 discourse-data redis[21]: Failed opening .rdb for saving: Permission denied
Dec  9 15:07:53 discourse-data redis[21]: 10 changes in 300 seconds. Saving...
Dec  9 15:07:53 discourse-data redis[211]: Failed opening .rdb for saving: Permission denied
Dec  9 15:07:53 discourse-data redis[21]: Background saving started by pid 211
Dec  9 15:07:53 discourse-data redis[21]: Background saving error

As the situation is now urgent, I’ve seen that setting this option might be a temporary fix: config set stop-writes-on-bgsave-error no (e.g. Temporarily fixed using - config set stop-writes-on-bgsave-error no · Issue #2146 · antirez/redis · GitHub). @sam, do you see any downside to this? Also, what do you think about applying the fixes mentioned in the Redis warnings above?

Dec  9 14:57:49 discourse-data redis[21]: WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Dec  9 14:57:49 discourse-data redis[21]: WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
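
For reference, the temporary workaround would just be applied against the running Redis; it only changes the in-memory config, so it wouldn’t survive a restart:

redis-cli config set stop-writes-on-bgsave-error no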

(Matt McNeil) #16

I made those two changes above (vm.overcommit_memory = 1 and disabling transparent huge pages) and that seems to have helped so far – 3 hours without another reboot-requiring crash – so perhaps this was a memory-related issue.
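
For the record, the changes were applied more or less as the warnings themselves suggest (run as root on the host):

sysctl vm.overcommit_memory=1
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
echo never > /sys/kernel/mm/transparent_hugepage/enabled
# plus the THP line added to /etc/rc.local so it survives a reboot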


(Matt McNeil) #17

Oy. If anyone is still following this: despite those memory-related fixes I made in the previous post, after 24 hours Redis became unavailable again, bringing down Discourse.
One thing making me question the memory-related cause is that when I logged into the server after the crash but before rebooting, there was still memory free (and lots of swap available):

top - 09:47:53 up 22:54,  1 user,  load average: 0.07, 0.06, 0.05
Tasks: 124 total,   2 running, 122 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.0 us,  0.7 sy,  0.0 ni, 98.2 id,  0.0 wa,  0.2 hi,  0.0 si,  0.0 st
KiB Mem:   2049988 total,  1943768 used,   106220 free,   414532 buffers
KiB Swap:  2097148 total,        8 used,  2097140 free.   348392 cached Mem

Here is the relevant chunk from the Redis logs.

Dec 10 14:24:33 discourse-data redis[19]: Background saving terminated with success
Dec 10 14:29:34 discourse-data redis[19]: 10 changes in 300 seconds. Saving...
Dec 10 14:29:34 discourse-data redis[19]: Background saving started by pid 5870
Dec 10 14:29:34 discourse-data redis[5870]: DB saved on disk
Dec 10 14:29:34 discourse-data redis[5870]: RDB: 0 MB of memory used by copy-on-write
Dec 10 14:29:34 discourse-data redis[19]: Background saving terminated with success
Dec 10 14:33:46 discourse-data redis[19]: DB saved on disk
Dec 10 14:33:46 discourse-data redis[19]: Failed opening .rdb for saving: Permission denied
Dec 10 14:38:47 discourse-data redis[19]: 10 changes in 300 seconds. Saving...
Dec 10 14:38:47 discourse-data redis[19]: Background saving started by pid 5919
Dec 10 14:38:47 discourse-data redis[5919]: Failed opening .rdb for saving: Permission denied
Dec 10 14:38:47 discourse-data redis[19]: Background saving error

@sam, given Redis’s function within the Discourse system, is it possible/advisable to disable its writing to disk (to prevent these app crashes)? http://redis.io/topics/persistence
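
(From those docs it looks like that would just mean removing the RDB save points, roughly:)

redis-cli config set save ""    # at runtime; not persisted across restarts
# or set: save ""  in the container's redis.conf to disable RDB snapshots entirely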


(Andrew Lombardi) #18

I’m seeing similar issues, with logs filling up the disk until the app 500s and will no longer run.

Redis is at 3.0.5, though there’s a version installed on the host box that is 2.8.4; not sure that matters.

$ redis-cli
127.0.0.1:6379> help
redis-cli 3.0.5
Type: "help @<group>" to get a list of commands in <group>
      "help <command>" for help on <command>
      "help <tab>" to get a list of possible help topics
      "quit" to exit
127.0.0.1:6379>

The logs in /var/discourse/shared/standalone/log/rails are completely filling up the disk

-rw-r--r-- 1 kinabalu www-data           0 Dec  9 18:37 production_errors.log
-rw-r--r-- 1 kinabalu www-data      181333 Dec 11 09:37 production.log
-rw-r--r-- 1 kinabalu www-data       21783 Dec 10 22:45 production.log-20151210.gz
-rw-r--r-- 1 kinabalu www-data      278082 Dec 11 02:35 production.log-20151211
-rw-r--r-- 1 kinabalu www-data 18511372529 Dec 11 09:37 unicorn.stderr.log
-rw-r--r-- 1 kinabalu www-data         319 Dec 10 19:48 unicorn.stderr.log-20151210.gz
-rw-r--r-- 1 kinabalu www-data         450 Dec 11 02:35 unicorn.stderr.log-20151211
-rw-r--r-- 1 kinabalu www-data           0 Dec 10 02:35 unicorn.stdout.log
-rw-r--r-- 1 kinabalu www-data          34 Dec  9 18:43 unicorn.stdout.log-20151210
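
As a stopgap, truncating the runaway log in place (rather than deleting it) should free the space without restarting anything:

: > /var/discourse/shared/standalone/log/rails/unicorn.stderr.log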

The production.log starts showing these errors, which appear to be related to Redis:

Failed to process job: NOAUTH Authentication required. ["/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis/pipeline.rb:79:in `finish'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis/client.rb:150:in `block in call_pipeline'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis/client.rb:280:in `with_reconnect'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis/client.rb:148:in `call_pipeline'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:2245:in `block in multi'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:57:in `block in synchronize'", "/usr/local/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:2237:in `multi'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/redis/reliable_pub_sub.rb:92:in `publish'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus.rb:221:in `publish'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus.rb:421:in `block in new_subscriber_thread'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/timer_thread.rb:98:in `call'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/timer_thread.rb:98:in `do_work'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/timer_thread.rb:29:in `block in initialize'"]
Job exception: NOAUTH Authentication required.

And the unicorn logs fill up with errors, which is obviously what is causing the disk space problems:

Failed to report error: NOAUTH Authentication required. 2 Failed to process job: NOAUTH Authentication required. ["/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis/pipeline.rb:79:in `finish'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis/client.rb:150:in `block in call_pipeline'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis/client.rb:280:in `with_reconnect'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis/client.rb:148:in `call_pipeline'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:2245:in `block in multi'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:57:in `block in synchronize'", "/usr/local/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:2237:in `multi'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/redis/reliable_pub_sub.rb:92:in `publish'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus.rb:221:in `publish'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus.rb:421:in `block in new_subscriber_thread'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/timer_thread.rb:98:in `call'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/timer_thread.rb:98:in `do_work'", "/var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/timer_thread.rb:29:in `block in initialize'"]
E, [2015-12-11T11:42:29.640449 #126] ERROR -- : app error: NOAUTH Authentication required. (Redis::CommandError)
E, [2015-12-11T11:42:29.641213 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis/client.rb:114:in `call'
E, [2015-12-11T11:42:29.641273 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:1754:in `block in zrangebyscore'
E, [2015-12-11T11:42:29.641315 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:57:in `block in synchronize'
E, [2015-12-11T11:42:29.641355 #126] ERROR -- : /usr/local/lib/ruby/2.0.0/monitor.rb:211:in `mon_synchronize'
E, [2015-12-11T11:42:29.641392 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:57:in `synchronize'
E, [2015-12-11T11:42:29.641431 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/redis-3.2.2/lib/redis.rb:1753:in `zrangebyscore'
E, [2015-12-11T11:42:29.641513 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/redis/reliable_pub_sub.rb:193:in `backlog'
E, [2015-12-11T11:42:29.641566 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus.rb:274:in `backlog'
E, [2015-12-11T11:42:29.641605 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/client.rb:92:in `block in backlog'
E, [2015-12-11T11:42:29.641643 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/client.rb:90:in `each'
E, [2015-12-11T11:42:29.641701 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/client.rb:90:in `backlog'
E, [2015-12-11T11:42:29.641742 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/message_bus-1.1.1/lib/message_bus/rack/middleware.rb:96:in `call'
E, [2015-12-11T11:42:29.641785 #126] ERROR -- : /var/www/discourse/lib/middleware/request_tracker.rb:73:in `call'
E, [2015-12-11T11:42:29.641822 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/engine.rb:518:in `call'
E, [2015-12-11T11:42:29.641897 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/application.rb:165:in `call'
E, [2015-12-11T11:42:29.641951 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/railtie.rb:194:in `public_send'
E, [2015-12-11T11:42:29.641991 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/railties-4.2.4/lib/rails/railtie.rb:194:in `method_missing'
E, [2015-12-11T11:42:29.642029 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/urlmap.rb:66:in `block in call'
E, [2015-12-11T11:42:29.642085 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in `each'
E, [2015-12-11T11:42:29.642125 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in `call'
E, [2015-12-11T11:42:29.642162 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.9.0/lib/unicorn/http_server.rb:580:in `process_client'
E, [2015-12-11T11:42:29.642199 #126] ERROR -- : /var/www/discourse/lib/scheduler/defer.rb:85:in `process_client'
E, [2015-12-11T11:42:29.642235 #126] ERROR -- : /var/www/discourse/lib/middleware/unicorn_oobgc.rb:95:in `process_client'
E, [2015-12-11T11:42:29.642272 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.9.0/lib/unicorn/http_server.rb:674:in `worker_loop'
E, [2015-12-11T11:42:29.642337 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.9.0/lib/unicorn/http_server.rb:529:in `spawn_missing_workers'
E, [2015-12-11T11:42:29.642382 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.9.0/lib/unicorn/http_server.rb:140:in `start'
E, [2015-12-11T11:42:29.642439 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.9.0/bin/unicorn:126:in `<top (required)>'
E, [2015-12-11T11:42:29.642490 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `load'
E, [2015-12-11T11:42:29.642528 #126] ERROR -- : /var/www/discourse/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'
Failed to report error: NOAUTH Authentication required. 3 Job exception: NOAUTH Authentication required.

What are the steps I can take to “fix” my install?

I apparently am on the beta track and wouldn’t mind reinstalling a lower version if it would mean more stability. I have v1.5.0.beta6 installed right now. I also have the following plugins installed:

  • discourse-details 0.3
  • discourse-tagging 0.2
  • docker_manager 0.1
  • lazyYT 1.0.1
  • poll 0.9

(Jeff Atwood) #19

Do you also have separate web and data containers?


(Andrew Lombardi) #20

No.

/var/discourse/shared/standalone/log/rails# docker ps
CONTAINER ID        IMAGE                 COMMAND             CREATED             STATUS              PORTS                                                                                    NAMES
1af71190813b        local_discourse/app   "/sbin/boot"        2 hours ago         Up 2 hours          0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:6379->6379/tcp, 0.0.0.0:2222->22/tcp   app