Connection error to external Redis

Hello Community,

We have a Discourse setup in Kubernetes. We are running a deployment with the Discourse container image, using external Redis and external PostgreSQL.
For Redis HA, we deployed a three-node StatefulSet managed by Sentinel, and we added an HAProxy deployment between Discourse and Redis with the following configuration:

    global
      log stdout format raw local0
      maxconn 102400
      maxsessrate 500
      maxconnrate 2000
      maxsslrate 2000

    resolvers mydns
      parse-resolv-conf
      hold valid 5s
      accepted_payload_size 8192

    defaults
      log global
      mode tcp
      timeout client 2h
      timeout connect 360s
      timeout server 2h
      default-server init-addr none
      retries 3

    frontend stats
        bind *:1024
        mode http
        http-request use-service prometheus-exporter if { path /metrics }
        stats enable
        stats uri /stats
        stats refresh 10s

    listen redis_master
      bind *:6379
      mode tcp

      option tcp-check
      tcp-check connect
      tcp-check send AUTH\ XXXXXXXXXXX\r\n
      tcp-check expect string +OK
      tcp-check send PING\r\n
      tcp-check expect string +PONG
      tcp-check send info\ replication\r\n
      tcp-check expect string role:master
      tcp-check send QUIT\r\n
      tcp-check expect string +OK
      server redis_node_0 discoursev2-redis-node-0.discoursev2-redis-headless.discoursev2.svc.cluster.local:6379 resolvers mydns check inter 1s fall 1 rise 1
      server redis_node_1 discoursev2-redis-node-1.discoursev2-redis-headless.discoursev2.svc.cluster.local:6379 resolvers mydns check inter 1s fall 1 rise 1
      server redis_node_2 discoursev2-redis-node-2.discoursev2-redis-headless.discoursev2.svc.cluster.local:6379 resolvers mydns check inter 1s fall 1 rise 1

    listen redis_slave
      bind *:6380
      mode tcp
      option tcp-check
      tcp-check connect
      tcp-check send AUTH\ XXXXXXXXXXX\r\n
      tcp-check expect string +OK
      tcp-check send PING\r\n
      tcp-check expect string +PONG
      tcp-check send QUIT\r\n
      tcp-check expect string +OK
      server redis_node_0 discoursev2-redis-node-0.discoursev2-redis-headless.discoursev2.svc.cluster.local:6379 resolvers mydns check inter 1s fall 1 rise 1
      server redis_node_1 discoursev2-redis-node-1.discoursev2-redis-headless.discoursev2.svc.cluster.local:6379 resolvers mydns check inter 1s fall 1 rise 1
      server redis_node_2 discoursev2-redis-node-2.discoursev2-redis-headless.discoursev2.svc.cluster.local:6379 resolvers mydns check inter 1s fall 1 rise 1
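For reference, Discourse is pointed at the HAProxy service through the usual Redis connection settings, roughly like this (host and port are the ones that appear in the logs below; password redacted):

    # sketch of the relevant Discourse settings, passed as environment variables
    DISCOURSE_REDIS_HOST=discoursev2-haproxy.discoursev2.svc.cluster.local
    DISCOURSE_REDIS_PORT=6379
    DISCOURSE_REDIS_PASSWORD=REDACTED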

Our problem is that every time we roll out the Discourse deployment, the Unicorn workers are initially unable to connect to Redis. They keep retrying, and after some time all of them finally connect. Here are some of the errors we see in the logs:

/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:398:in `rescue in establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:684:in `init_worker_process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:547:in `spawn_missing_workers'
config/unicorn.conf.rb:299:in `block in reload'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:721:in `worker_loop'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:379:in `establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:115:in `block in connect'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:295:in `reconnect'
/var/www/discourse/lib/discourse.rb:906:in `after_fork'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/bin/unicorn:128:in `<top (required)>'
/var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `<main>'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:344:in `with_reconnect'
E, [2024-07-24T15:42:22.524194 #531] ERROR -- : Error connecting to Redis on discoursev2-haproxy.discoursev2.svc.cluster.local:6379 (Socket::ResolutionError) (Redis::CannotConnectError)
/var/www/discourse/lib/discourse_redis.rb:217:in `reconnect'
/var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `load'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:114:in `connect'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:143:in `start'
I, [2024-07-24T15:42:23.499254 #422]  INFO -- : Loading Sidekiq in process id 422
2024-07-24 15:42:23 +0000: Starting Prometheus global reporter pid: 439
Booting Sidekiq 6.5.12 with Sidekiq::RedisConnection::RedisAdapter options {:host=>"discoursev2-haproxy.discoursev2.svc.cluster.local", :port=>6379, :password=>"REDACTED", :namespace=>"sidekiq"}
E, [2024-07-24T15:42:23.530765 #810] ERROR -- : Error connecting to Redis on discoursev2-haproxy.discoursev2.svc.cluster.local:6379 (Socket::ResolutionError) (Redis::CannotConnectError)
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:398:in `rescue in establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:115:in `block in connect'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:684:in `init_worker_process'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:547:in `spawn_missing_workers'
/var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `<main>'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:344:in `with_reconnect'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:295:in `reconnect'
config/unicorn.conf.rb:299:in `block in reload'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:379:in `establish_connection'
/var/www/discourse/lib/discourse.rb:906:in `after_fork'
/var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `load'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/bin/unicorn:128:in `<top (required)>'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:114:in `connect'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:721:in `worker_loop'
/var/www/discourse/lib/discourse_redis.rb:217:in `reconnect'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:143:in `start'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:143:in `start'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:344:in `with_reconnect'
config/unicorn.conf.rb:299:in `block in reload'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:398:in `rescue in establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:379:in `establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:295:in `reconnect'
/var/www/discourse/lib/discourse_redis.rb:217:in `reconnect'
E, [2024-07-24T15:42:24.540440 #887] ERROR -- : Error connecting to Redis on discoursev2-haproxy.discoursev2.svc.cluster.local:6379 (Socket::ResolutionError) (Redis::CannotConnectError)
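Every failure above is a Socket::ResolutionError, i.e. the HAProxy Service name itself does not resolve from the freshly started pod at the moment the Unicorn workers fork and reconnect to Redis (the traces go through after_fork and reconnect); once resolution starts working, the workers connect normally. A quick way to confirm this from a shell inside the pod during a rollout (plain debug commands, host name taken from the logs above):

    getent hosts discoursev2-haproxy.discoursev2.svc.cluster.local    # fails while the name does not resolve
    nslookup discoursev2-haproxy.discoursev2.svc.cluster.local        # same lookup via the cluster DNS
    redis-cli -h discoursev2-haproxy.discoursev2.svc.cluster.local -p 6379 -a 'REDACTED' ping   # should return PONG once DNS works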

We also tested removing HAProxy and pointing Discourse directly at the Redis master node, and we observed the same problem. We additionally disabled the Istio gateway between Discourse and Redis. In both scenarios, all Unicorn workers eventually manage to establish the connection to Redis after some time.
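Would it be reasonable to hold the Discourse container back until the Service name resolves, for example with an init container running a loop like the sketch below (assuming a busybox-style image and the standard cluster resolver), or is there a cleaner way to handle this on the Discourse side?

    # init container command: block until the HAProxy Service name resolves (sketch only)
    until nslookup discoursev2-haproxy.discoursev2.svc.cluster.local; do
      echo "waiting for DNS..."
      sleep 2
    done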

Can anybody give us some guidance on this problem? Do we need any special configuration on Redis? This is our current Redis configuration:

appendonly yes                       # persist via the append-only file
appendfsync no                       # let the OS decide when to fsync the AOF
aof-rewrite-incremental-fsync yes    # fsync incrementally while rewriting the AOF
save ""                              # RDB snapshots disabled

Finally, we would like to know:

  1. What is the recommended way to run Redis in HA mode? As far as we know, Redis Cluster is not supported.
  2. If we use a single Redis node, what are the consequences of losing Redis for a few seconds? Would Discourse repopulate its data if we moved to a stateless Redis that loses everything whenever it restarts?

Many thanks in advance

Hello Community,

I wanted to follow up on my previous post regarding our Discourse setup in Kubernetes, where we are facing connectivity issues between the Unicorn workers and Redis during deployment rollouts. Despite trying various configurations, including removing HAProxy and disabling the Istio gateway, the issue persists.

We would greatly appreciate any guidance or recommendations to resolve this problem. Specifically:

  1. Do we need any special configuration for Redis to support our setup?
  2. What is the recommended approach for ensuring high availability with Redis, considering that Redis Cluster is not supported?
  3. If we opt for a single Redis node setup, what would be the impact of a brief Redis outage? Will Discourse repopulate the data in a stateless Redis environment where all data is lost upon restart?

Your insights would be invaluable as we continue troubleshooting this issue.

Thank you in advance for your support.