No errors, instance not coming back up after rebuild

For the first time in years, my instance is not coming back up after a rebuild. My last successful rebuild was yesterday.

Could a recent commit be the reason for this?

What are the symptoms?

What do you see if you

tail -f /var/discourse/shared/standalone/log/rails/production.log
Done in 89.60s.
Downloading MaxMindDB...
Compressing Javascript and Generating Source Maps

I, [2023-03-08T17:11:55.996529 #1]  INFO -- : File > /usr/local/bin/discourse  chmod: +x  chown: 
I, [2023-03-08T17:11:55.999205 #1]  INFO -- : File > /usr/local/bin/rails  chmod: +x  chown: 
I, [2023-03-08T17:11:56.001899 #1]  INFO -- : File > /usr/local/bin/rake  chmod: +x  chown: 
I, [2023-03-08T17:11:56.004616 #1]  INFO -- : File > /usr/local/bin/rbtrace  chmod: +x  chown: 
I, [2023-03-08T17:11:56.007306 #1]  INFO -- : File > /usr/local/bin/stackprof  chmod: +x  chown: 
I, [2023-03-08T17:11:56.010710 #1]  INFO -- : File > /etc/update-motd.d/10-web  chmod: +x  chown: 
I, [2023-03-08T17:11:56.012746 #1]  INFO -- : File > /etc/logrotate.d/rails  chmod:   chown: 
I, [2023-03-08T17:11:56.014343 #1]  INFO -- : File > /etc/logrotate.d/nginx  chmod:   chown: 
I, [2023-03-08T17:11:56.017963 #1]  INFO -- : File > /etc/runit/1.d/00-ensure-links  chmod: +x  chown: 
I, [2023-03-08T17:11:56.020609 #1]  INFO -- : File > /etc/runit/1.d/01-cleanup-web-pids  chmod: +x  chown: 
I, [2023-03-08T17:11:56.023663 #1]  INFO -- : File > /root/.bash_profile  chmod: 644  chown: 
I, [2023-03-08T17:11:56.026021 #1]  INFO -- : File > /usr/local/etc/ImageMagick-7/policy.xml  chmod:   chown: 
I, [2023-03-08T17:11:56.026795 #1]  INFO -- : Replacing (?-mix:server.+{) with limit_req_zone $binary_remote_addr zone=flood:10m rate=$reqs_per_secondr/s;
limit_req_zone $binary_remote_addr zone=bot:10m rate=$reqs_per_minuter/m;
limit_req_status 429;
limit_conn_zone $binary_remote_addr zone=connperip:10m;
limit_conn_status 429;
server {
 in /etc/nginx/conf.d/discourse.conf
I, [2023-03-08T17:11:56.026984 #1]  INFO -- : Replacing (?-mix:location @discourse {) with location @discourse {
  limit_conn connperip $conn_per_ip;
  limit_req zone=flood burst=$burst_per_second nodelay;
  limit_req zone=bot burst=$burst_per_minute nodelay; in /etc/nginx/conf.d/discourse.conf
I, [2023-03-08T17:11:56.029658 #1]  INFO -- : File > /etc/runit/1.d/remove-old-socket  chmod: +x  chown: 
I, [2023-03-08T17:11:56.032272 #1]  INFO -- : File > /etc/runit/3.d/remove-old-socket  chmod: +x  chown: 
I, [2023-03-08T17:11:56.032398 #1]  INFO -- : Replacing (?-mix:listen 80;) with listen unix:/shared/nginx.http.sock;
set_real_ip_from unix:;
 in /etc/nginx/conf.d/discourse.conf
I, [2023-03-08T17:11:56.032577 #1]  INFO -- : Replacing (?-mix:listen 443 ssl http2;) with listen unix:/shared/nginx.https.sock ssl http2;
set_real_ip_from unix:; in /etc/nginx/conf.d/discourse.conf
I, [2023-03-08T17:11:56.035350 #1]  INFO -- : File > /tmp/add-cloudflare-ips  chmod: +x  chown: 
I, [2023-03-08T17:11:56.035435 #1]  INFO -- : > /tmp/add-cloudflare-ips
I, [2023-03-08T17:11:56.359453 #1]  INFO -- : CloudFlare IPs:
set_real_ip_from 173.245.48.0/20; set_real_ip_from 103.21.244.0/22; set_real_ip_from 103.22.200.0/22; set_real_ip_from 103.31.4.0/22; set_real_ip_from 141.101.64.0/18; set_real_ip_from 108.162.192.0/18; set_real_ip_from 190.93.240.0/20; set_real_ip_from 188.114.96.0/20; set_real_ip_from 197.234.240.0/22; set_real_ip_from 198.41.128.0/17; set_real_ip_from 162.158.0.0/15; set_real_ip_from 104.16.0.0/13; set_real_ip_from 104.24.0.0/14; set_real_ip_from 172.64.0.0/13; set_real_ip_from 131.0.72.0/22; set_real_ip_from 2400:cb00::/32; set_real_ip_from 2606:4700::/32; set_real_ip_from 2803:f800::/32; set_real_ip_from 2405:b500::/32; set_real_ip_from 2405:8100::/32; set_real_ip_from 2a06:98c0::/29; set_real_ip_from 2c0f:f248::/32;

I, [2023-03-08T17:11:56.359655 #1]  INFO -- : > rm /tmp/add-cloudflare-ips
I, [2023-03-08T17:11:56.361599 #1]  INFO -- : 
I, [2023-03-08T17:11:56.361818 #1]  INFO -- : > echo "Beginning of custom commands"
I, [2023-03-08T17:11:56.363535 #1]  INFO -- : Beginning of custom commands

I, [2023-03-08T17:11:56.367829 #1]  INFO -- : File > /etc/service/monerochan_merchant_rpc/run  chmod: +x  chown: 
I, [2023-03-08T17:11:56.368034 #1]  INFO -- : > echo "End of custom commands"
I, [2023-03-08T17:11:56.369958 #1]  INFO -- : End of custom commands

I, [2023-03-08T17:11:56.370117 #1]  INFO -- : Terminating async processes
I, [2023-03-08T17:11:56.370225 #1]  INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main pid: 42
I, [2023-03-08T17:11:56.370261 #1]  INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 103
2023-03-08 17:11:56.370 UTC [42] LOG:  received fast shutdown request
103:signal-handler (1678295516) Received SIGTERM scheduling shutdown...
2023-03-08 17:11:56.372 UTC [42] LOG:  aborting any active transactions
2023-03-08 17:11:56.374 UTC [42] LOG:  background worker "logical replication launcher" (PID 51) exited with exit code 1
2023-03-08 17:11:56.375 UTC [46] LOG:  shutting down
2023-03-08 17:11:56.392 UTC [42] LOG:  database system is shut down
103:M 08 Mar 2023 17:11:56.469 # User requested shutdown...
103:M 08 Mar 2023 17:11:56.469 * Saving the final RDB snapshot before exiting.
103:M 08 Mar 2023 17:11:56.570 * DB saved on disk
103:M 08 Mar 2023 17:11:56.570 # Redis is now ready to exit, bye bye...
sha256:422bd26e098f3af0623647ebce02770ac1608bfac07260aeb5469ab975696363
a0b91a9cc45e8666352e172143854705faa97b38208fcfe0650ea929989b8570
Removing old container
+ /usr/bin/docker rm app
app
➜  discourse git:(main) ✗ tail -f /var/discourse/shared/standalone/log/rails/production.log


Bye!
Deprecation notice: (siwe) full_screen_login is now forced. The full_screen_login parameter can be removed from the auth_provider. (removal in Discourse 2.9.0) 
At /var/www/discourse/lib/plugin/instance.rb:763:in `public_send`
Migrating to MakeChatMentionNotificationIdNullable (20230227172543)
Migrating to DropBadgeGrantedTitleColumn (20230228105851)
Migrating to AddExternalToSidebarUrls (20230303015952)
Theme setting type has changed but cannot be converted. 

 #<ThemeSettingsManager::Upload:0x00007f60ff714e38 @name=:background_image, @default="", @theme=#<Theme id: 31, name: "Search Banner", user_id: 2, created_at: "2021-08-03 16:38:09.042735000 +0000", updated_at: "2021-08-03 17:01:41.329058000 +0000", compiler_version: 0, user_selectable: false, hidden: false, color_scheme_id: nil, remote_theme_id: 29, component: true, enabled: true, auto_update: true>, @opts={:description=>"background image for the banner", :textarea=>false, :json_schema=>nil, :refresh=>false}, @types={:integer=>0, :float=>1, :string=>2, :bool=>3, :list=>4, :enum=>5, :upload=>6}>
Bye!
Deprecation notice: (siwe) full_screen_login is now forced. The full_screen_login parameter can be removed from the auth_provider. (removal in Discourse 2.9.0) 
At /var/www/discourse/lib/plugin/instance.rb:763:in `public_send`
Migrating to MakeChatMentionNotificationIdNullable (20230227172543)
Migrating to DropBadgeGrantedTitleColumn (20230228105851)
Migrating to AddExternalToSidebarUrls (20230303015952)
Theme setting type has changed but cannot be converted. 

 #<ThemeSettingsManager::Upload:0x00007f60ff714e38 @name=:background_image, @default="", @theme=#<Theme id: 31, name: "Search Banner", user_id: 2, created_at: "2021-08-03 16:38:09.042735000 +0000", updated_at: "2021-08-03 17:01:41.329058000 +0000", compiler_version: 0, user_selectable: false, hidden: false, color_scheme_id: nil, remote_theme_id: 29, component: true, enabled: true, auto_update: true>, @opts={:description=>"background image for the banner", :textarea=>false, :json_schema=>nil, :refresh=>false}, @types={:integer=>0, :float=>1, :string=>2, :bool=>3, :list=>4, :enum=>5, :upload=>6}>

I’m not seeing anything out of the ordinary. Perhaps the nginx lines are the problem? (My nginx setup is from the official Discourse maintenance page tutorial.)

What happens if you visit your site? Can you share the URL?

Rather not in public, but I’ll shoot you a DM.

Edit: I rerouted nginx to stop intercepting 502 errors and the like. It’s just a bad gateway error.

(screenshot: 502 Bad Gateway error page)

You’ll need to look at the log file when something tries to load the site and see what the error is.

If Cloudflare (or whatever) is stopping a browser from accessing the site, then you can look through the logs for a 500 error to see what the issue is.

You might be able to curl localhost from inside the container.
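
A minimal sketch of that probe, assuming the standard /var/discourse install: enter the container with `cd /var/discourse && ./launcher enter app`, then ask the app directly, which takes Cloudflare and the host nginx out of the picture. The `probe` helper name is just for illustration.

```shell
# Hypothetical helper: print the HTTP status code for a URL,
# or 000 if nothing answers within the timeout.
probe() {
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1")
  echo "${code:-000}"
}
# probe http://localhost   # 200 here would mean Rails itself is serving pages
```

If this returns 200 inside the container but the public site shows a 502, the problem is between the host nginx and the container, not in Rails.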

That is some good feedback. Will try that and get back.

EDIT: Hmm, I don’t think anything happens in the logs, but I’m seeing the following.

➜  ~ tail -f /var/discourse/shared/standalone/log/rails/production.log
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/subscribe.rb:14:in `subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:288:in `_subscription'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/commands/pubsub.rb:20:in `block in subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:265:in `block in synchronize'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:265:in `synchronize'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:265:in `synchronize'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/commands/pubsub.rb:19:in `subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus/backends/redis.rb:302:in `global_subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus.rb:768:in `global_subscribe_thread'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus.rb:739:in `block in new_subscriber_thread'

In my nginx I’m seeing:

2023/03/08 17:58:38 [crit] 115962#115962: *448 connect() to unix:/var/discourse/shared/standalone/nginx.http.sock failed (2: No such file or directory) while connecting to upstream, client: IP.XXX, server: domain.com, request: "GET /service-worker.js HTTP/2.0", upstream: "http://unix:/var/discourse/shared/standalone/nginx.http.sock:/service-worker.js", host: "domain.com", referrer: "https://domain.com/service-worker.js"
2023/03/08 17:59:32 [notice] 318573#318573: signal process started

Does this help? Or should I completely revert the nginx setup for this?
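
For context, the "(2: No such file or directory)" in that upstream error means the host nginx can't find the unix socket the container is supposed to create. A quick check, sketched here with the socket path taken from the error message above:

```shell
# Does the socket the host nginx proxies to actually exist?
SOCK=/var/discourse/shared/standalone/nginx.http.sock
if [ -S "$SOCK" ]; then
  echo "socket present - container nginx is up"
else
  echo "socket missing - the container is not running (or failed to boot)"
fi
```

If the socket is missing, the 502 is only a symptom: the container never came back up, so the answer is in the container's own logs rather than in the host nginx config.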

If you share the full backtrace (a few more lines of that file), we can help you.

Completed 200 OK in 25ms (Views: 0.1ms | ActiveRecord: 0.0ms | Allocations: 17734)
Started GET "/categories_and_latest" for [ip removed] at 2023-03-08 16:20:41 +0000
Processing by CategoriesController#categories_and_latest as JSON
  Rendered text template (Duration: 0.0ms | Allocations: 1)
Completed 200 OK in 112ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 63242)
Started GET "/" for [ip removed] at 2023-03-08 16:21:00 +0000
Processing by CategoriesController#index as HTML
  Rendered categories/index.html.erb within layouts/crawler (Duration: 1.4ms | Allocations: 1135)
  Rendered layout layouts/crawler.html.erb (Duration: 6.7ms | Allocations: 3536)
Completed 200 OK in 75ms (Views: 7.7ms | ActiveRecord: 0.0ms | Allocations: 41712)
Started GET "/notifications?limit=30&recent=true&bump_last_seen_reviewable=true" for [ip removed] at 2023-03-08 16:21:35 +0000
Processing by NotificationsController#index as JSON
  Parameters: {"limit"=>"30", "recent"=>"true", "bump_last_seen_reviewable"=>"true"}
Completed 200 OK in 60ms (Views: 0.1ms | ActiveRecord: 0.0ms | Allocations: 30822)
Started GET "/" for [ip removed] at 2023-03-08 16:22:00 +0000
Processing by CategoriesController#index as HTML
  Rendered categories/index.html.erb within layouts/crawler (Duration: 1.1ms | Allocations: 1135)
  Rendered layout layouts/crawler.html.erb (Duration: 5.4ms | Allocations: 3536)
Completed 200 OK in 86ms (Views: 6.1ms | ActiveRecord: 0.0ms | Allocations: 41842)
Shutting down
Terminating quiet threads
Scheduler exiting...
Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL) subscribe failed, reconnecting in 1 second. Call stack /var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:398:in `rescue in establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:379:in `establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:115:in `block in connect'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:344:in `with_reconnect'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:114:in `connect'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:409:in `ensure_connected'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:269:in `block in process'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:356:in `logging'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:268:in `process'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:175:in `block in call_loop'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:331:in `with_socket_timeout'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:174:in `call_loop'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/subscribe.rb:44:in `subscription'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/subscribe.rb:14:in `subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:288:in `_subscription'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/commands/pubsub.rb:20:in `block in subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:265:in `block in synchronize'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:265:in `synchronize'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:265:in `synchronize'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/commands/pubsub.rb:19:in `subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus/backends/redis.rb:302:in `global_subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus.rb:768:in `global_subscribe_thread'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus.rb:739:in `block in new_subscriber_thread'
Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
Job exception: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
Job exception: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

Job exception: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

Job exception: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

Job exception: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

Job exception: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

Failed to process job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL) ["/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:398:in `rescue in establish_connection'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:379:in `establish_connection'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:115:in `block in connect'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:344:in `with_reconnect'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:114:in `connect'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:409:in `ensure_connected'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:269:in `block in process'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:356:in `logging'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:268:in `process'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:161:in `call'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rack-mini-profiler-3.0.0/lib/mini_profiler/profiling_methods.rb:85:in `block in profile_method'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:270:in `block in send_command'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:269:in `synchronize'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:269:in `send_command'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/commands/scripting.rb:110:in `_eval'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/commands/scripting.rb:97:in `evalsha'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus/backends/redis.rb:463:in `cached_eval'", 
"/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus/backends/redis.rb:150:in `publish'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus.rb:391:in `publish'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus.rb:751:in `block in new_subscriber_thread'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus/timer_thread.rb:117:in `do_work'", "/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus/timer_thread.rb:95:in `block (2 levels) in queue'"]
Pausing to allow jobs to finish...
heartbeat: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL) subscribe failed, reconnecting in 1 second. Call stack /var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:398:in `rescue in establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:379:in `establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:115:in `block in connect'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:344:in `with_reconnect'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:114:in `connect'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:409:in `ensure_connected'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:269:in `block in process'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:356:in `logging'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:268:in `process'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:161:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rack-mini-profiler-3.0.0/lib/mini_profiler/profiling_methods.rb:85:in `block in profile_method'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:270:in `block in send_command'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:269:in `synchronize'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:269:in `send_command'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/commands/strings.rb:191:in `get'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus/backends/redis.rb:401:in `process_global_backlog'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus/backends/redis.rb:286:in `block in global_subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus/backends/redis.rb:299:in `global_subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus.rb:768:in `global_subscribe_thread'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus.rb:739:in `block in new_subscriber_thread'
Job exception: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

Bye!
Deprecation notice: (siwe) full_screen_login is now forced. The full_screen_login parameter can be removed from the auth_provider. (removal in Discourse 2.9.0) 
At /var/www/discourse/lib/plugin/instance.rb:763:in `public_send`
Migrating to MakeChatMentionNotificationIdNullable (20230227172543)
Migrating to DropBadgeGrantedTitleColumn (20230228105851)
Migrating to AddExternalToSidebarUrls (20230303015952)
Theme setting type has changed but cannot be converted. 

 #<ThemeSettingsManager::Upload:0x00007f60ff714e38 @name=:background_image, @default="", @theme=#<Theme id: 31, name: "Search Banner", user_id: 2, created_at: "2021-08-03 16:38:09.042735000 +0000", updated_at: "2021-08-03 17:01:41.329058000 +0000", compiler_version: 0, user_selectable: false, hidden: false, color_scheme_id: nil, remote_theme_id: 29, component: true, enabled: true, auto_update: true>, @opts={:description=>"background image for the banner", :textarea=>false, :json_schema=>nil, :refresh=>false}, @types={:integer=>0, :float=>1, :string=>2, :bool=>3, :list=>4, :enum=>5, :upload=>6}>
Deprecation notice: (siwe) full_screen_login is now forced. The full_screen_login parameter can be removed from the auth_provider. (removal in Discourse 2.9.0) 
At /var/www/discourse/lib/plugin/instance.rb:763:in `public_send`
Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL) subscribe failed, reconnecting in 1 second. Call stack /var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:398:in `rescue in establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:379:in `establish_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:115:in `block in connect'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:344:in `with_reconnect'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:114:in `connect'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:409:in `ensure_connected'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:269:in `block in process'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:356:in `logging'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:268:in `process'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:175:in `block in call_loop'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:331:in `with_socket_timeout'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/client.rb:174:in `call_loop'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/subscribe.rb:44:in `subscription'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/subscribe.rb:14:in `subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:288:in `_subscription'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/commands/pubsub.rb:20:in `block in subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:265:in `block in synchronize'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:265:in `synchronize'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis.rb:265:in `synchronize'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/redis-4.8.1/lib/redis/commands/pubsub.rb:19:in `subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus/backends/redis.rb:302:in `global_subscribe'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus.rb:768:in `global_subscribe_thread'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/message_bus-4.3.2/lib/message_bus.rb:739:in `block in new_subscriber_thread'

This is from just before the “fatal” rebuild, up through a few more attempts with plugins removed, etc.

I deleted the following plugins, and now my instance is booting up again.

So it’s probably related to a recent commit in one of those.
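
For anyone bisecting the same way: plugins can be disabled one at a time by commenting out their `git clone` lines in the `hooks` section of `containers/app.yml` and rebuilding after each change. A sketch (the plugin names here are illustrative, not a recommendation):

```yaml
# containers/app.yml - keep docker_manager, comment out suspects:
hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git
          #- git clone https://github.com/discourse/discourse-automation.git
          #- git clone https://github.com/discourse/discourse-assign.git
```

Then `cd /var/discourse && ./launcher rebuild app` after each change; the first configuration that boots narrows down the culprit.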

The error doesn’t make it into the log because it happens before Rails cranks up:

root@test1-web-only:/var/www/discourse# rails c
bundler: failed to load command: pry (/var/www/discourse/vendor/bundle/ruby/3.2.0/bin/pry)
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/zeitwerk-2.6.7/lib/zeitwerk/loader/helpers.rb:135:in `const_get': uninitialized constant PluginInstance (NameError)

    parent.const_get(cname, false)
          ^^^^^^^^^^
Did you mean?  PluginStore

I have replicated the error on another install. I’m trying to figure out which plugin it is right now.

Can you please share the plugin list and each plugin’s commit?

I have figured out it’s a conflict between Automation and another plugin. It will rebuild with only Automation, but not with the other plugins enabled. I’m getting close to finding the other culprit…

Actually, we had a bug in both automation and assign, and we just fixed both. So rebuilding now will fix it.

cc @pfaffman

Someone else just told me that

Here’s the list of plugins (plus one more that’s private)

          - git clone https://github.com/discourse/docker_manager.git
            #- git clone https://github.com/discourse/discourse-docs.git
            #- git clone https://github.com/discourse/discourse-solved.git
            #- git clone https://github.com/discourse/discourse-voting.git
            #- git clone https://github.com/discourse/discourse-reactions.git
            #- git clone https://github.com/discourse/discourse-canned-replies.git
            #- git clone https://github.com/jomaxro/discourse-plugin-site-setting-override.git
            #- git clone https://github.com/discourse/discourse-automation.git

I was able to get the

Aha. So apparently my test that was supposed to confirm the broken plugin confirms the fix.

Same here, haha. How’s that for timing! Thanks, Falco.

I will try to rebuild now and restore the removed plugins. Just in case, here is the rest of the plugin list:

EDIT: Working with the 3 removed plugins back installed. Thanks team!

discourse 12436d05 Up to date
docker_manager e90c8f55 Up to date
discourse-adplugin bfd4438b Up to date
discourse-calendar adca3f65 Up to date
discourse-category-lockdown 2cf5f064 Up to date
discourse-chat-integration 75cf4136 Up to date
discourse-chatbot eb9c50ae Up to date
discourse-data-explorer 389b8e15 Up to date
discourse-docs 63bb4629 Up to date
discourse-encrypt 0f3c612b Up to date
discourse-formatting-toolbar d99f3c6d Up to date
discourse-gamification a842e183 Up to date
discourse-patreon 778829aa Up to date
discourse-policy b86d520c Up to date
discourse-pushover-notifications 30711ac7 Up to date
discourse-siwe 752687c8 Up to date
discourse-solved 2c1c64af Up to date
discourse-staff-alias 10ae5329 Up to date
discourse-telegram-notifications d9886998 Up to date
discourse-whos-online aeee51e4 Up to date

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.