Problem with the new update: debugging options

I've noticed some problems with the recent update. I just updated from 3.4.0.beta1 to the latest version (3.4.0.beta2-dev) and the site appears to work, but after a while (within 2 hours) it stops responding (I get a host error via Cloudflare). I still have the old Docker container running (with the app/database separated), so I pointed the proxy back at the old Docker container, which still works.

I'm trying to figure out the best way to debug this. Is there a way to temporarily assign a different hostname to the Docker container so I can debug that instance separately (without it rewriting the database with new entries)?

1 like

Is there a way to view the logs from the command line?

2 likes

Yes, take a look in /var/discourse/shared/standalone/log/rails
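
For example (assuming the standard standalone install layout and the default log file names), something like this follows them live from the host:

tail -f /var/discourse/shared/standalone/log/rails/production.log
tail -f /var/discourse/shared/standalone/log/rails/unicorn.stderr.log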

2 likes

There seem to be a lot of Redis errors, although these persist even after reverting to the previous Docker container.

E, [2024-08-29T13:47:47.112726 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:148:in `public_send'
E, [2024-08-29T13:47:47.112738 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:148:in `block (3 levels) in <class:DiscourseRedis>'
E, [2024-08-29T13:47:47.112748 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:29:in `ignore_readonly'
E, [2024-08-29T13:47:47.112768 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:148:in `block (2 levels) in <class:DiscourseRedis>'
E, [2024-08-29T13:47:47.112781 #74] ERROR -- : /var/www/discourse/app/jobs/regular/run_heartbeat.rb:16:in `last_heartbeat'
E, [2024-08-29T13:47:47.112790 #74] ERROR -- : config/unicorn.conf.rb:182:in `check_sidekiq_heartbeat'
E, [2024-08-29T13:47:47.112799 #74] ERROR -- : config/unicorn.conf.rb:269:in `master_sleep'
E, [2024-08-29T13:47:47.112807 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:295:in `join'
E, [2024-08-29T13:47:47.112815 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/bin/unicorn:128:in `<top (required)>'
E, [2024-08-29T13:47:47.112823 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `load'
E, [2024-08-29T13:47:47.112831 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `<main>'
E, [2024-08-29T13:47:57.124969 #74] ERROR -- : master loop error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) (Redis::CannotConnectError)
E, [2024-08-29T13:47:57.125051 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:398:in `rescue in establish_connection'
E, [2024-08-29T13:47:57.125063 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:379:in `establish_connection'
E, [2024-08-29T13:47:57.125072 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:115:in `block in connect'
E, [2024-08-29T13:47:57.125080 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:344:in `with_reconnect'
E, [2024-08-29T13:47:57.125088 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:114:in `connect'
E, [2024-08-29T13:47:57.125109 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:409:in `ensure_connected'
E, [2024-08-29T13:47:57.125118 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:269:in `block in process'
E, [2024-08-29T13:47:57.125132 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:356:in `logging'
E, [2024-08-29T13:47:57.125142 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:268:in `process'
E, [2024-08-29T13:47:57.125150 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:161:in `call'
E, [2024-08-29T13:47:57.125159 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/mini_profiler/profiling_methods.rb:89:in `block in profile_method'
E, [2024-08-29T13:47:57.125167 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis.rb:270:in `block in send_command'
E, [2024-08-29T13:47:57.125181 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis.rb:269:in `synchronize'
E, [2024-08-29T13:47:57.125190 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis.rb:269:in `send_command'
E, [2024-08-29T13:47:57.125198 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/commands/strings.rb:191:in `get'
E, [2024-08-29T13:47:57.125207 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:148:in `public_send'
E, [2024-08-29T13:47:57.125216 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:148:in `block (3 levels) in <class:DiscourseRedis>'
E, [2024-08-29T13:47:57.125224 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:29:in `ignore_readonly'
E, [2024-08-29T13:47:57.125232 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:148:in `block (2 levels) in <class:DiscourseRedis>'
E, [2024-08-29T13:47:57.125241 #74] ERROR -- : /var/www/discourse/app/jobs/regular/run_heartbeat.rb:16:in `last_heartbeat'
E, [2024-08-29T13:47:57.125264 #74] ERROR -- : config/unicorn.conf.rb:182:in `check_sidekiq_heartbeat'
E, [2024-08-29T13:47:57.125272 #74] ERROR -- : config/unicorn.conf.rb:269:in `master_sleep'
E, [2024-08-29T13:47:57.125283 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:295:in `join'
E, [2024-08-29T13:47:57.125291 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/bin/unicorn:128:in `<top (required)>'
E, [2024-08-29T13:47:57.125299 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `load'
E, [2024-08-29T13:47:57.125308 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `<main>'
E, [2024-08-29T13:48:07.136412 #74] ERROR -- : master loop error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) (Redis::CannotConnectError)
E, [2024-08-29T13:48:07.136504 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:398:in `rescue in establish_connection'
E, [2024-08-29T13:48:07.136517 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:379:in `establish_connection'
E, [2024-08-29T13:48:07.136526 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:115:in `block in connect'
E, [2024-08-29T13:48:07.136534 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:344:in `with_reconnect'
E, [2024-08-29T13:48:07.136542 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:114:in `connect'
E, [2024-08-29T13:48:07.136550 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:409:in `ensure_connected'
E, [2024-08-29T13:48:07.136559 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:269:in `block in process'
E, [2024-08-29T13:48:07.136567 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:356:in `logging'
E, [2024-08-29T13:48:07.136590 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:268:in `process'
E, [2024-08-29T13:48:07.136599 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:161:in `call'
E, [2024-08-29T13:48:07.136607 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/mini_profiler/profiling_methods.rb:89:in `block in profile_method'
E, [2024-08-29T13:48:07.136615 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis.rb:270:in `block in send_command'
E, [2024-08-29T13:48:07.136626 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis.rb:269:in `synchronize'
E, [2024-08-29T13:48:07.136638 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis.rb:269:in `send_command'
E, [2024-08-29T13:48:07.136646 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/commands/strings.rb:191:in `get'
E, [2024-08-29T13:48:07.136656 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:148:in `public_send'
E, [2024-08-29T13:48:07.136665 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:148:in `block (3 levels) in <class:DiscourseRedis>'
E, [2024-08-29T13:48:07.136674 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:29:in `ignore_readonly'
E, [2024-08-29T13:48:07.136682 #74] ERROR -- : /var/www/discourse/lib/discourse_redis.rb:148:in `block (2 levels) in <class:DiscourseRedis>'
E, [2024-08-29T13:48:07.136690 #74] ERROR -- : /var/www/discourse/app/jobs/regular/run_heartbeat.rb:16:in `last_heartbeat'
E, [2024-08-29T13:48:07.136698 #74] ERROR -- : config/unicorn.conf.rb:182:in `check_sidekiq_heartbeat'
E, [2024-08-29T13:48:07.136707 #74] ERROR -- : config/unicorn.conf.rb:269:in `master_sleep'
E, [2024-08-29T13:48:07.136716 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:295:in `join'
E, [2024-08-29T13:48:07.136735 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/bin/unicorn:128:in `<top (required)>'
E, [2024-08-29T13:48:07.136744 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `load'
E, [2024-08-29T13:48:07.136752 #74] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `<main>'
1 like

Would it be worth rebooting the server?

1 like

My understanding is that each ‘app’ container hosts its own Redis server, so there should be no way for one Docker container to ‘contaminate’ another Docker container's Redis server.
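
One way to sanity-check that (a rough sketch, assuming the standard install where redis-server and redis-cli live inside the app container) is to enter the container and ping its local Redis directly:

cd /var/discourse
./launcher enter app    # shell inside the running container
redis-cli ping          # should print PONG if this container's Redis is healthy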

Looking through the logs, all the old logs seem very boring, some only 20 bytes long.

Just for today there are 8.1 MB of error messages in unicorn.stdout.log-20240829, which I assume was rotated, and it's quickly approaching another 8 MB in unicorn.stdout.log.
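
For reference, I'm measuring that with nothing fancier than standard tools against the shared log directory:

cd /var/discourse/shared/standalone/log/rails
ls -lh unicorn.stdout.log*                        # current and rotated log sizes
grep -c "Redis::TimeoutError" unicorn.stdout.log  # how many timeout errors so far today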

Corresponding errors in stderr.log:

Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 3 [default] Error connecting to Redis on localhost:6379 (Redis::TimeoutError)

Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 3 [default] Job exception: Error connecting to Redis on localhost:6379 (Redis::TimeoutError)
sidekiq-exception
Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 3 [default] Job exception: Error connecting to Redis on localhost:6379 (Redis::TimeoutError)
sidekiq-exception
Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 2 [default] Unexpected error in MiniSchedulerLongRunningJobLogger thread : Redis::CannotConnectError : Error connecting to Redis on localhost:6379 (Redis::TimeoutError)
discourse-exception
Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 3 [default] Job exception: Error connecting to Redis on localhost:6379 (Redis::TimeoutError)
sidekiq-exception
Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 3 [default] heartbeat: Error connecting to Redis on localhost:6379 (Redis::TimeoutError)

Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 3 [default] Job exception: Error connecting to Redis on localhost:6379 (Redis::TimeoutError)
sidekiq-exception
Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 3 [default] Job exception: Error connecting to Redis on localhost:6379 (Redis::TimeoutError)
sidekiq-exception
Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 3 [default] Error connecting to Redis on localhost:6379 (Redis::TimeoutError)

Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 2 [default] Unexpected error in MiniSchedulerLongRunningJobLogger thread : Redis::CannotConnectError : Error connecting to Redis on localhost:6379 (Redis::TimeoutError)
discourse-exception
Failed to report error: Error connecting to Redis on localhost:6379 (Redis::TimeoutError) 3 [default] heartbeat: Error connecting to Redis on localhost:6379 (Redis::TimeoutError)

The timing seems to coincide with the update.

How silly of me: the continuous error logs were because the upgraded Docker instance was still running! I stopped the upgraded instance and the logs went quiet again.
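
For the record, that was just the standard launcher commands from the host; new_app is a placeholder for whatever the upgraded container is named in containers/:

cd /var/discourse
docker ps               # confirm which containers are running
./launcher stop new_app # stop only the upgraded one; the old container keeps serving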

I'll see whether restarting it produces the same errors.

I think it's the same error as this one: Error connecting to Redis

Maybe it's some kind of race condition at startup?

As an aside, could we add timestamps to unicorn.stderr.log? That would help with debugging!

Are the launcher rebuild logs saved anywhere? I wanted to look through them for any clues as to why this happened.

No, I think it just prints to the console.

You could do

./launcher rebuild app > buildlog.txt

next time.
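
If it helps, a lot of the interesting failure output goes to stderr, so capturing both streams while still watching the build on screen might be a safer variant (just a suggestion):

./launcher rebuild app 2>&1 | tee buildlog.txt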

But they use the same shared storage, so if you don't stop the new one, bad things will happen.
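
To make the shared-storage point concrete, you can see the overlap from the host: both containers will typically report the same /var/discourse/shared/standalone mount (container names below are placeholders for whatever yours are called):

docker inspect -f '{{ json .Mounts }}' app
docker inspect -f '{{ json .Mounts }}' new_app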

Unfortunately, after running for half a day, the same problem happened again.

Here the Docker container was rebooted, and at the line Thu Aug 29 15:58:34 UTC 2024 it was switched live:

I, [2024-08-29T14:13:32.515947 #68]  INFO -- : Refreshing Gem list
I, [2024-08-29T14:13:36.905305 #68]  INFO -- : listening on addr=127.0.0.1:3000 fd=15
I, [2024-08-29T14:13:42.283988 #68]  INFO -- : starting 1 supervised sidekiqs
I, [2024-08-29T14:13:46.693381 #317]  INFO -- : Loading Sidekiq in process id 317
I, [2024-08-29T14:13:47.963367 #356]  INFO -- : worker=0 ready
I, [2024-08-29T14:13:49.811765 #420]  INFO -- : worker=1 ready
I, [2024-08-29T14:13:50.990474 #68]  INFO -- : master process ready
I, [2024-08-29T14:13:51.394171 #538]  INFO -- : worker=2 ready
I, [2024-08-29T14:13:52.632528 #688]  INFO -- : worker=3 ready
I, [2024-08-29T14:13:53.468952 #824]  INFO -- : worker=4 ready
I, [2024-08-29T14:13:54.848829 #945]  INFO -- : worker=5 ready
I, [2024-08-29T14:13:55.276090 #1084]  INFO -- : worker=6 ready
I, [2024-08-29T14:13:55.663733 #1206]  INFO -- : worker=7 ready
*****Thu Aug 29 15:58:34 UTC 2024*****
I, [2024-08-29T16:19:42.701388 #2847461]  INFO -- : Loading Sidekiq in process id 2847461
[STARTED]
'system' has started the backup!
Marking backup as running...

I’m not sure why, after Loading Sidekiq in process id 2847461, the same INFO worker= lines didn't appear, or whether that is significant.

The container ran fine until around midnight, when the backup started. Here's the log after the backup completed (note there is plenty of RAM and disk space):

Finalizing backup...
Creating archive: discourse-2024-08-29-033737-v20240731190511.tar.gz
Making sure archive does not already exist...
Creating empty archive...
Archiving data dump...
Archiving uploads...
Removing tmp '/var/www/discourse/tmp/backups/default/2024-08-29-033737' directory...
Gzipping archive, this may take a while...
Executing the after_create_hook for the backup...
Deleting old backups...
Cleaning stuff up...
Removing '.tar' leftovers...
Marking backup as finished...
Refreshing disk stats...
Finished!
[SUCCESS]
I, [2024-08-29T07:39:13.994558 #74]  INFO -- : Refreshing Gem list
I, [2024-08-29T07:39:22.139288 #74]  INFO -- : listening on addr=127.0.0.1:3000 fd=15
I, [2024-08-29T07:39:28.538100 #74]  INFO -- : starting 1 supervised sidekiqs
I, [2024-08-29T07:39:32.817733 #338]  INFO -- : Loading Sidekiq in process id 338
I, [2024-08-29T07:39:33.888697 #377]  INFO -- : worker=0 ready
I, [2024-08-29T07:39:35.956335 #449]  INFO -- : worker=1 ready
I, [2024-08-29T07:39:37.233970 #74]  INFO -- : master process ready
I, [2024-08-29T07:39:37.437223 #575]  INFO -- : worker=2 ready
I, [2024-08-29T07:39:38.974154 #718]  INFO -- : worker=3 ready
I, [2024-08-29T07:39:39.789086 #842]  INFO -- : worker=4 ready
I, [2024-08-29T07:39:40.417719 #960]  INFO -- : worker=5 ready
I, [2024-08-29T07:39:40.993249 #1084]  INFO -- : worker=6 ready
I, [2024-08-29T07:39:41.395415 #1236]  INFO -- : worker=7 ready
I, [2024-08-29T07:43:38.285891 #74]  INFO -- : master reopening logs...
I, [2024-08-29T07:43:38.288650 #2790085]  INFO -- : Sidekiq reopening logs...
I, [2024-08-29T07:43:38.658839 #454]  INFO -- : worker=1 reopening logs...
I, [2024-08-29T07:43:53.315240 #845]  INFO -- : worker=4 reopening logs...
I, [2024-08-29T07:43:53.315258 #1121415]  INFO -- : worker=0 reopening logs...
I, [2024-08-29T07:43:53.318146 #720]  INFO -- : worker=3 reopening logs...
I, [2024-08-29T07:43:53.318152 #1256]  INFO -- : worker=7 reopening logs...
I, [2024-08-29T07:43:53.319042 #586]  INFO -- : worker=2 reopening logs...
I, [2024-08-29T07:43:53.322139 #960]  INFO -- : worker=5 reopening logs...
I, [2024-08-29T07:43:53.330282 #1109]  INFO -- : worker=6 reopening logs...
E, [2024-08-29T09:24:44.381913 #377] ERROR -- : app error: Connection timed out (Redis::TimeoutError)
E, [2024-08-29T09:24:44.382304 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/connection/ruby.rb:58:in `block in _read_from_socket'
E, [2024-08-29T09:24:44.382516 #377] ERROR -- : <internal:kernel>:187:in `loop'
E, [2024-08-29T09:24:44.382536 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/connection/ruby.rb:54:in `_read_from_socket'
E, [2024-08-29T09:24:44.382687 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/connection/ruby.rb:47:in `gets'
E, [2024-08-29T09:24:44.382782 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/connection/ruby.rb:382:in `read'
E, [2024-08-29T09:24:44.383064 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:311:in `block in read'
E, [2024-08-29T09:24:44.383269 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:299:in `io'
E, [2024-08-29T09:24:44.383293 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:310:in `read'
E, [2024-08-29T09:24:44.383304 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:161:in `block in call'
E, [2024-08-29T09:24:44.383313 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:279:in `block (2 levels) in process'
E, [2024-08-29T09:24:44.383323 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:411:in `ensure_connected'
E, [2024-08-29T09:24:44.383332 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:269:in `block in process'
E, [2024-08-29T09:24:44.383341 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:356:in `logging'
E, [2024-08-29T09:24:44.383349 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:268:in `process'
E, [2024-08-29T09:24:44.383358 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/client.rb:161:in `call'
E, [2024-08-29T09:24:44.383366 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-mini-profiler-3.3.1/lib/mini_profiler/profiling_methods.rb:89:in `block in profile_method'
E, [2024-08-29T09:24:44.383375 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis.rb:270:in `block in send_command'
E, [2024-08-29T09:24:44.383384 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis.rb:269:in `synchronize'
E, [2024-08-29T09:24:44.383407 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis.rb:269:in `send_command'
E, [2024-08-29T09:24:44.383416 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/redis-4.8.1/lib/redis/commands/lists.rb:11:in `llen'
E, [2024-08-29T09:24:44.383425 #377] ERROR -- : /var/www/discourse/lib/rate_limiter.rb:192:in `is_under_limit?'
E, [2024-08-29T09:24:44.383434 #377] ERROR -- : /var/www/discourse/lib/rate_limiter.rb:72:in `can_perform?'
E, [2024-08-29T09:24:44.383443 #377] ERROR -- : /var/www/discourse/lib/middleware/request_tracker.rb:488:in `rate_limit'
E, [2024-08-29T09:24:44.383453 #377] ERROR -- : /var/www/discourse/lib/middleware/request_tracker.rb:322:in `call'
E, [2024-08-29T09:24:44.383461 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/actionpack-7.1.4/lib/action_dispatch/middleware/remote_ip.rb:92:in `call'
E, [2024-08-29T09:24:44.383470 #377] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-6.1.0/lib/rails_multisite/middleware.rb:26:in `call'
E, [2024-08-30T05:11:44.698172 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-7.1.4/lib/rails/engine.rb:536:in `call'
E, [2024-08-30T05:11:44.698180 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-7.1.4/lib/rails/railtie.rb:226:in `public_send'
E, [2024-08-30T05:11:44.698189 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-7.1.4/lib/rails/railtie.rb:226:in `method_missing'
E, [2024-08-30T05:11:44.698199 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-2.2.9/lib/rack/urlmap.rb:74:in `block in call'
E, [2024-08-30T05:11:44.698207 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-2.2.9/lib/rack/urlmap.rb:58:in `each'
E, [2024-08-30T05:11:44.698216 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rack-2.2.9/lib/rack/urlmap.rb:58:in `call'
E, [2024-08-30T05:11:44.698225 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:634:in `process_client'
E, [2024-08-30T05:11:44.698233 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:739:in `worker_loop'
E, [2024-08-30T05:11:44.698241 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:547:in `spawn_missing_workers'
E, [2024-08-30T05:11:44.698261 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/lib/unicorn/http_server.rb:143:in `start'
E, [2024-08-30T05:11:44.698270 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/unicorn-6.1.0/bin/unicorn:128:in `<top (required)>'
E, [2024-08-30T05:11:44.698291 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `load'
E, [2024-08-30T05:11:44.698300 #420] ERROR -- : /var/www/discourse/vendor/bundle/ruby/3.3.0/bin/unicorn:25:in `<main>'

After Sidekiq is restarted following the backup, the Redis timeout errors begin within 2 hours. I'm not sure why it takes about 2 hours; perhaps that was simply the first load of the site in the morning.

Any ideas? I thought monitoring the startup of the container would be enough, but if the error can be re-triggered after start/stop of sidekiq post-backup, then I need another strategy.
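
One crude thing I'm considering (a sketch only, assuming the container is named app and that redis-cli is available inside it) is a watchdog loop on the host that pings the container's Redis every minute and logs when it stops answering, so the timeout gets a timestamp independent of unicorn:

#!/bin/bash
# Naive Redis watchdog sketch: 'app' is the assumed container name.
# Appends a line whenever the container's Redis stops answering PING.
while true; do
  if ! docker exec app redis-cli ping > /dev/null 2>&1; then
    echo "$(date -u '+%Y-%m-%dT%H:%M:%SZ') redis not responding" >> /var/log/discourse-redis-watchdog.log
  fi
  sleep 60
done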

I also found this:

E, [2024-08-29T09:25:58.612452 #74] ERROR -- : worker=3 PID:718 running too long (29s), sending USR2 to dump thread backtraces
E, [2024-08-29T09:25:58.612563 #74] ERROR -- : worker=4 PID:842 running too long (29s), sending USR2 to dump thread backtraces
E, [2024-08-29T09:25:58.612587 #74] ERROR -- : worker=5 PID:960 running too long (29s), sending USR2 to dump thread backtraces
E, [2024-08-29T09:25:59.613766 #74] ERROR -- : worker=3 PID:718 running too long (30s), sending USR2 to dump thread backtraces
E, [2024-08-29T09:25:59.613874 #74] ERROR -- : worker=4 PID:842 running too long (30s), sending USR2 to dump thread backtraces
E, [2024-08-29T09:25:59.613897 #74] ERROR -- : worker=5 PID:960 running too long (30s), sending USR2 to dump thread backtraces
E, [2024-08-29T09:25:59.613931 #74] ERROR -- : worker=6 PID:1084 running too long (29s), sending USR2 to dump thread backtraces

Is there a way to have Redis use a different location for its storage, for example /var/log/redis, so that it would be possible to have a container on “hot standby”?

For a standby container you'll want separate data and web containers. Then multiple containers can run against the same database and Redis.

I already use an external database, so I would split the container that is currently web-plus-Redis into separate web and Redis containers, and then have hot standbys for the web part.

So, effectively: shared DB and Redis containers, and multiple web containers.
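
For anyone reading along, the usual shape of that split (based on the standard discourse_docker samples; the hostnames/addresses below are placeholders) is a data container running PostgreSQL and Redis, plus one or more web-only containers whose env points at it:

# containers/web_only.yml (illustrative excerpt)
env:
  DISCOURSE_HOSTNAME: forum.example.com
  DISCOURSE_DB_HOST: 10.0.0.5       # shared PostgreSQL (or the data container's address)
  DISCOURSE_REDIS_HOST: 10.0.0.5    # shared Redis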

1 like

Do you have experience splitting Redis out, and is it a setup you would recommend or advise against? I'm a little reluctant to stray too far from the standard setup and am not sure whether I'd run into any pitfalls.

You've already separated the database.

Yes, I've done it before, both with managed Redis and with a Redis-only container.

2 likes