Job exception: JavaScript was terminated (either by timeout or explicitly)

I’ve just migrated an older Discourse install (1.9.0.beta5) to a fresh droplet on DigitalOcean with the latest everything. Given how old it was, I wasn’t sure everything would come over correctly, but it seems to work fine.

Except it crashes every few hours.

This is the error log I got at 12:57am today:

Message

Job exception: JavaScript was terminated (either by timeout or explicitly)

Backtrace

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:201:in `eval_unsafe' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:201:in `block (2 levels) in eval' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:307:in `timeout' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:200:in `block in eval' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:198:in `synchronize' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/mini_racer-0.2.6/lib/mini_racer.rb:198:in `eval' 
/var/www/discourse/lib/es6_module_transpiler/tilt/es6_module_transpiler_template.rb:122:in `block in module_transpile' 
/var/www/discourse/lib/es6_module_transpiler/tilt/es6_module_transpiler_template.rb:81:in `block in protect' 
/var/www/discourse/lib/es6_module_transpiler/tilt/es6_module_transpiler_template.rb:80:in `synchronize' 
/var/www/discourse/lib/es6_module_transpiler/tilt/es6_module_transpiler_template.rb:80:in `protect' 
/var/www/discourse/lib/es6_module_transpiler/tilt/es6_module_transpiler_template.rb:115:in `module_transpile' 
/var/www/discourse/lib/pretty_text.rb:42:in `apply_es6_file' 
/var/www/discourse/lib/pretty_text.rb:61:in `block in ctx_load_manifest' 
/var/www/discourse/lib/pretty_text.rb:58:in `each_line' 
/var/www/discourse/lib/pretty_text.rb:58:in `ctx_load_manifest' 
/var/www/discourse/lib/pretty_text.rb:83:in `create_es6_context' 
/var/www/discourse/lib/pretty_text.rb:124:in `block in v8' 
/var/www/discourse/lib/pretty_text.rb:122:in `synchronize' 
/var/www/discourse/lib/pretty_text.rb:122:in `v8' 
/var/www/discourse/lib/pretty_text.rb:144:in `block in markdown' 
/var/www/discourse/lib/pretty_text.rb:411:in `block in protect' 
/var/www/discourse/lib/pretty_text.rb:410:in `synchronize' 
/var/www/discourse/lib/pretty_text.rb:410:in `protect' 
/var/www/discourse/lib/pretty_text.rb:143:in `markdown' 
/var/www/discourse/lib/pretty_text.rb:257:in `cook' 
/var/www/discourse/app/models/post_analyzer.rb:33:in `cook' 
/var/www/discourse/app/models/post.rb:289:in `cook' 
/var/www/discourse/lib/cooked_post_processor.rb:30:in `initialize' 
/var/www/discourse/app/jobs/regular/process_post.rb:25:in `new' 
/var/www/discourse/app/jobs/regular/process_post.rb:25:in `execute' 
/var/www/discourse/app/jobs/base.rb:232:in `block (2 levels) in perform' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/rails_multisite-2.0.7/lib/rails_multisite/connection_management.rb:63:in `with_connection' 
/var/www/discourse/app/jobs/base.rb:221:in `block in perform' 
/var/www/discourse/app/jobs/base.rb:217:in `each' 
/var/www/discourse/app/jobs/base.rb:217:in `perform' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:192:in `execute_job' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:165:in `block (2 levels) in process' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:128:in `block in invoke' 
/var/www/discourse/lib/sidekiq/pausable.rb:138:in `call' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:130:in `block in invoke' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/middleware/chain.rb:133:in `invoke' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:164:in `block in process' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:137:in `block (6 levels) in dispatch' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:109:in `local' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:136:in `block (5 levels) in dispatch' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq.rb:37:in `block in <module:Sidekiq>' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:132:in `block (4 levels) in dispatch' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:250:in `stats' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:127:in `block (3 levels) in dispatch' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/job_logger.rb:8:in `call' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:126:in `block (2 levels) in dispatch' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/job_retry.rb:74:in `global' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:125:in `block in dispatch' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:48:in `with_context' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/logging.rb:42:in `with_job_hash_context' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:124:in `dispatch' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:163:in `process' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:83:in `process_one' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/processor.rb:71:in `run' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:16:in `watchdog' 
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/sidekiq-5.2.7/lib/sidekiq/util.rb:25:in `block in safe_thread'

Env

|hostname|dow-18-app|
| --- | --- |
|process_id|8390|
|application_version|800e49f16e43e0783d30971e84a4e4619d448a7c|
|current_db|default|
|current_hostname|forum.driveonwood.com|
|job|Jobs::ProcessPost|
|problem_db|default|
|opts|post_id: 94118, bypass_bump: true, new_post: false, current_site_id: default|

The site was unresponsive when I tried to load it this morning; I got a 504 error. The server was still running, and `./launcher logs app` showed this repeatedly:

ok: run: redis: (pid 48) 11912s
ok: run: postgres: (pid 47) 11912s
supervisor pid: 26654 unicorn pid: 26658
config/unicorn_launcher: line 71: kill: (26658) - No such process
config/unicorn_launcher: line 15: kill: (26658) - No such process
(26654) exiting

After rebooting at 9am, the site ran OK for a few hours, then crashed again at 11:04am, with substantially the same error logs both times. Rebooting again solved the issue (for now). I’ll keep an eye on it and reboot as needed until I get this solved.

Site is here: http://forum.driveonwood.com
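
(For reference, on a standard Docker-based install it should be possible to check the logs and bounce just the Discourse container rather than rebooting the whole droplet; roughly along these lines, assuming the default /var/discourse location:)

```
cd /var/discourse
./launcher logs app      # show the container's recent logs
./launcher restart app   # restart only the Discourse container
```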

This is likely load-related. How is CPU looking on your server? How is memory?
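
If you’re not sure, something like this from a shell on the droplet gives a rough picture (standard Linux tools, nothing Discourse-specific):

```
top        # press P to sort by CPU, Shift+M to sort by memory
free -m    # memory and swap usage, in MB
df -h      # disk usage, in case uploads or backups have filled the disk
```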

Site seems to be stable since those two crashes. CPU and memory use are normal. I don’t know what they were during the downtime. Perhaps there was some heavy load initially due to the transition.

I’ll update this thread if anything happens.

Just crashed. I rebooted the server again, and it is currently using 70-80% CPU. It is converting JPGs for some reason, and we have a lot of photos. I’ll reboot as needed; hopefully this is a one-time task.

Crashed again today. Before restarting, I noted that 97% of the CPU was going to this process:

ruby /var/www/discourse/vendor/bundle/ruby/2.6.0/bin/unicorn -E production -c config/unicorn.conf.rb

Overall memory usage:

  3073648 K total memory
   904712 K used memory
  1510576 K active memory
   376240 K inactive memory
   815448 K free memory
   306872 K buffer memory
  1046616 K swap cache
        0 K total swap
        0 K used swap
        0 K free swap

Any insights on what this particular process does?
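
(For anyone debugging something similar, one way to see what is actually running inside the container is roughly the following; `./launcher enter app` drops you into a shell there:)

```
cd /var/discourse
./launcher enter app               # open a shell inside the running container
ps aux --sort=-%cpu | head -n 10   # list the processes using the most CPU
```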

The way that images are stored changed a while back, and they’re all being reprocessed. There’s logic that’s supposed to keep this from crashing the site, but it’s not enough in your case. I would recommend increasing your RAM for a while until they’re all done.

I think there is a site setting that controls how many images are processed in a batch, but I can’t quite remember what it is at the moment.
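
If resizing the droplet isn’t convenient right away, adding some temporary swap may also take the edge off while the reprocessing runs; your memory output above shows 0 K of swap. A rough sketch with standard Linux commands (the 2 GB size is just an example):

```
sudo fallocate -l 2G /swapfile   # create a 2 GB swap file
sudo chmod 600 /swapfile         # restrict permissions
sudo mkswap /swapfile            # format it as swap
sudo swapon /swapfile            # enable it for the current boot
```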

Thanks. I suspected something like that. We have 15 GB of images.

I’ll keep an eye on it, and just reboot as needed.

More specifically, the `srcset` attribute is now supported on the `<img>` tag, so retina devices see higher-resolution images. As a side effect this means more copies of each image, because you have multiple display densities to serve from the same source image.

See https://html.com/attributes/img-srcset/
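
For example, a post’s HTML can end up with something along the lines of `<img src="photo.jpg" srcset="photo.jpg, photo_2x.jpg 2x">` (the filenames here are purely illustrative): the browser picks whichever copy matches the display density, so extra resized versions of each upload have to be generated and stored.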

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.