Job exception: JavaScript was terminated (either by timeout or explicitly)

I just migrated an old Discourse install (1.9.0.beta5) to a fresh Digital Ocean droplet with everything updated to the latest version. Given how old it was, I wasn't sure everything would transfer correctly, but it seems to be working fine.

Except that it crashes every few hours.

This is the error log I got at 12:57 a.m. today:

Message

Job exception: JavaScript was terminated (either by timeout or explicitly)

Backtrace

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:201:in `eval_unsafe'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:201:in `block (2 levels) in eval'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:307:in `timeout'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:200:in `block in eval'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:198:in `synchronize'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:198:in `eval'
/var/www/discourse/lib/es6_module_transpiler/tilt/es6_module_transpiler_template.rb:122:in `block in module_transpile'
/var/www/discourse/lib/es6_module_transpiler/tilt/es6_module_transpiler_template.rb:81:in `block in protect'
/var/www/discourse/lib/es6_module_transpiler/tilt/es6_module_transpiler_template.rb:80:in `synchronize'
/var/www/discourse/lib/es6_module_transpiler/tilt/es6_module_transpiler_template.rb:80:in `protect'
/var/www/discourse/lib/es6_module_transpiler/tilt/es6_module_transpiler_template.rb:115:in `module_transpile'
/var/www/discourse/lib/pretty_text.rb:42:in `apply_es6_file'
/var/www/discourse/lib/pretty_text.rb:61:in `block in ctx_load_manifest'
/var/www/discourse/lib/pretty_text.rb:58:in `each_line'
/var/www/discourse/lib/pretty_text.rb:58:in `ctx_load_manifest'
/var/www/discourse/lib/pretty_text.rb:83:in `create_es6_context'
/var/www/discourse/lib/pretty_text.rb:124:in `block in v8'
/var/www/discourse/lib/pretty_text.rb:122:in `synchronize'
/var/www/discourse/lib/pretty_text.rb:122:in `v8'
/var/www/discourse/lib/pretty_text.rb:144:in `block in markdown'
/var/www/discourse/lib/pretty_text.rb:411:in `block in protect'
/var/www/discourse/lib/pretty_text.rb:410:in `synchronize'
/var/www/discourse/lib/pretty_text.rb:410:in `protect'
/var/www/discourse/lib/pretty_text.rb:143:in `markdown'
/var/www/discourse/lib/pretty_text.rb:257:in `cook'
/var/www/discourse/app/models/post_analyzer.rb:33:in `cook'
/var/www/discourse/app/models/post.rb:289:in `cook'
/var/www/discourse/lib/cooked_post_processor.rb:30:in `initialize'
/var/www/discourse/app/jobs/regular/process_post.rb:25:in `new'
/var/www/discourse/app/jobs/regular/process_post.rb:25:in `execute'
/var/www/discourse/app/jobs/base.rb:232:in `block (2 levels) in perform'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/rails_multisite-2.0.7/lib/rails_multisite/connection_management.rb:63:in `with_connection'
/var/www/discourse/app/jobs/base.rb:221:in `block in perform'
/var/www/discourse/app/jobs/base.rb:217:in `each'
/var/www/discourse/app/jobs/base.rb:217:in `perform'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:192:in `execute_job'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:165:in `block (2 levels) in process'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:128:in `block in invoke'
/var/www/discourse/lib/sidekiq/pausable.rb:138:in `call'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:133:in `invoke'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:164:in `block in process'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:137:in `block (6 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:109:in `local'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:136:in `block (5 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq.rb:37:in `block in <module:Sidekiq>'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:132:in `block (4 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:250:in `stats'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:127:in `block (3 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/job_logger.rb:8:in `call'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:126:in `block (2 levels) in dispatch'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:74:in `global'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:125:in `block in dispatch'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:48:in `with_context'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:42:in `with_job_hash_context'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:124:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:163:in `process'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:83:in `process_one'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:71:in `run'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:16:in `watchdog'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:25:in `block in safe_thread'

Environment

|hostname|dow-18-app|
| --- | --- |
|process_id|8390|
|application_version|800e49f16e43e0783d30971e84a4e4619d448a7c|
|current_db|default|
|current_hostname|forum.driveonwood.com|
|job|Jobs::ProcessPost|
|problem_db|default|
|opts|post_id: 94118, bypass_bump: true, new_post: false, current_site_id: default|

The site was unresponsive when I tried to load it this morning; I got a 504 error. The server was still running, and the log from ./launcher log app repeatedly showed this:

ok: run: redis: (pid 48) 11912s
ok: run: postgres: (pid 47) 11912s
supervisor pid: 26654 unicorn pid: 26658
config/unicorn_launcher: line 71: kill: (26658) - No such process
config/unicorn_launcher: line 15: kill: (26658) - No such process
(26654) exiting

After a restart at 9 a.m., the site ran fine for a few hours, but then went down again at 11:04 a.m. The error logs are substantially the same in both cases. Restarting again fixed it (for now). I'll keep an eye on it and restart as needed until I sort this out.

The site is here: http://forum.driveonwood.com


This is likely load-related. How is CPU looking on your server? How is memory?


Site seems to be stable since those two crashes. CPU and memory use are normal. I don’t know what they were during the downtime. Perhaps there was some heavy load initially due to the transition.

I’ll update this thread if anything happens.

Just crashed. I rebooted the server again, and it is currently using 70-80% of the CPU. It is converting JPEGs for some reason, and we have a lot of photos. I'll reboot as needed; hopefully this is a one-time task.


Crashed again today. Before restarting, I noted that 97% of the CPU was going to this process:

ruby /var/www/discourse/vendor/bundle/ruby/2.6.0/bin/unicorn -E production -c config/unicorn.conf.rb
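For reference, this is how I found it, using standard procps tools rather than anything Discourse-specific:

```shell
# List the top CPU consumers; the unicorn master and its workers
# show up as separate lines, so sorting by CPU finds the busy one.
ps aux --sort=-%cpu | head -n 6

# Narrow the view to the Discourse processes (unicorn web workers
# and sidekiq background workers), dropping the grep line itself.
ps aux | grep -E 'unicorn|sidekiq' | grep -v grep
```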

Overall memory usage:

  3073648 K total memory
  904712 K used memory
 1510576 K active memory
  376240 K inactive memory
  815448 K free memory
  306872 K buffer memory
 1046616 K swap cache
       0 K total swap
       0 K used swap
       0 K free swap
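For anyone who wants to pull the same numbers on their own server, the standard free command gives this kind of summary (note that the listing above shows zero swap configured):

```shell
# Human-readable memory and swap totals.
free -h

# Kilobyte view, closer to the raw numbers in the listing above.
free -k
```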

Any insights on what this particular process does?

The way that images are stored changed a while back, and they're all being reprocessed. There are safeguards that are supposed to keep it from crashing, but they're not enough in your case. I would recommend increasing your RAM for a while until they're all done.

I think there is a site setting that controls how many images are processed in a batch, but I can't quite remember what it is at the moment.


Thanks. I suspected something like that. We have 15 GB of images.

I’ll keep an eye on it, and just reboot as needed.


More specifically, the srcset attribute is now supported on the <img> tag so retina devices see higher resolution images. This means more copies of images as a side effect, because you have multiple display densities to serve with the same image.

See https://html.com/attributes/img-srcset/
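For illustration, a minimal srcset example (the filenames here are made up):

```html
<!-- The browser picks the candidate matching the device pixel ratio;
     a retina (2x) screen downloads the larger file, a standard
     screen only downloads the 1x version. -->
<img src="photo.jpg"
     srcset="photo.jpg 1x, photo@2x.jpg 2x"
     alt="Forum photo">
```

This is why the rebake generates extra resized copies of every existing upload: each image now needs variants for each display density it might be served at.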


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.