Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

After installing Discourse with “Docker version 20.10.5, build 55c4c88” on CentOS 8, I have the site running, but it always times out on DELETE calls to /draft.json.

I have Apache running, and I have tried both an Apache reverse proxy and HAProxy; the behaviour is the same.

I don’t know if it is in any way connected, but I find lots of these errors in unicorn.stdout.log:

==> ./shared/standalone/log/rails/unicorn.stdout.log <==

2021-03-18T10:02:25.138Z pid=108 tid=ocs ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

Starting up 1 supervised sidekiqs

Loading Sidekiq in process id 119

2021-03-18T10:40:09.682Z pid=119 tid=orn ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

2021-03-18T10:40:09.684Z pid=119 tid=ohf ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

2021-03-18T10:40:09.685Z pid=119 tid=owv ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

2021-03-18T10:40:09.683Z pid=119 tid=oob ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

2021-03-18T10:40:09.683Z pid=119 tid=omr ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

Starting up 1 supervised sidekiqs

Loading Sidekiq in process id 121

Does Redis log any errors in /var/discourse/shared/standalone/log/var-log/redis?

This is the content of “current”:

53:M 18 Mar 2021 17:07:05.076 * 10 changes in 300 seconds. Saving...
53:M 18 Mar 2021 17:07:05.076 * Background saving started by pid 25018
25018:C 18 Mar 2021 17:07:05.126 * DB saved on disk
25018:C 18 Mar 2021 17:07:05.127 * RDB: 2 MB of memory used by copy-on-write
53:M 18 Mar 2021 17:07:05.177 * Background saving terminated with success
53:M 18 Mar 2021 17:12:06.097 * 10 changes in 300 seconds. Saving...
53:M 18 Mar 2021 17:12:06.098 * Background saving started by pid 25337
25337:C 18 Mar 2021 17:12:06.145 * DB saved on disk
25337:C 18 Mar 2021 17:12:06.145 * RDB: 2 MB of memory used by copy-on-write
53:M 18 Mar 2021 17:12:06.198 * Background saving terminated with success

These logs look alright to me. Do you still experience issues while navigating your Discourse instance?

I still can’t delete posts or invitations, or cancel post edits.

It seems to be a problem with all calls that use the HTTP DELETE method. Can I tell Discourse to use POST instead of DELETE? It looks like How can Discourse make POST instead of DELETE for "smart" CDN?, but I’m not using any CDN, just a plain virtual server running CentOS, with Discourse installed in Docker following the instructions from the Discourse site.
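
As an aside on the POST-instead-of-DELETE idea: Rails applications generally accept a method override on form-encoded POST bodies via the Rack::MethodOverride middleware. Whether Discourse leaves that enabled is an assumption, but it makes for a quick diagnostic — if a plain POST carrying `_method=delete` answers immediately (even with an error), then the proxy is only mishandling the DELETE verb itself. The hostname and parameters below are placeholders:

```shell
# Diagnostic sketch, not a fix. The hostname and draft_key are
# placeholders; the `_method=delete` override is a Rack::MethodOverride
# convention -- whether Discourse honors it is an assumption.
curl -s -X POST "http://discourse.example.com/draft.json" \
  --data "_method=delete&draft_key=topic_10" \
  -w '\nHTTP %{http_code} in %{time_total}s\n' \
  || echo "request failed (placeholder host)"
```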

Here is part of the HAProxy log with 408 errors on HTTP DELETE. What could be “dropping” the DELETE requests, @codinghorror? If these DELETE requests reach HAProxy, they should also reach the software inside Docker; could something in there be responsible for the problem?

Mar 20 01:47:46 id22302.example.com haproxy[43043]: 176.243.177.232:61760 [20/Mar/2021:01:47:16.622] http-in discourse_socket/server3 0/0/0/30004/30034 408 71 - - CD-- 3/3/2/2/0 0/0 "DELETE /draft.json HTTP/1.1"

Mar 20 01:48:09 id22302.example.com haproxy[43043]: 176.243.177.232:61505 [20/Mar/2021:01:47:43.921] http-in discourse_socket/server3 0/0/0/25081/25081 200 396 - - ---- 2/2/1/1/0 0/0 "POST /message-bus/742b0b89384944a7b6376e2d6978889d/poll HTTP/1.1"

Mar 20 01:48:12 id22302.example.com haproxy[43043]: 176.243.177.232:61887 [20/Mar/2021:01:47:42.886] http-in discourse_socket/server3 0/0/0/30007/30037 408 71 - - CD-- 2/2/1/1/0 0/0 "DELETE /draft.json HTTP/1.1"

Mar 20 01:48:25 id22302.example.com haproxy[43043]: 176.243.177.232:62083 [20/Mar/2021:01:48:25.863] http-in discourse_socket/server3 0/0/0/122/122 200 404 - - ---- 2/2/1/1/0 0/0 "POST /topics/timings HTTP/1.1"

Mar 20 01:48:34 id22302.example.com haproxy[43043]: 176.243.177.232:61505 [20/Mar/2021:01:48:09.193] http-in discourse_socket/server3 0/0/0/25044/25044 200 396 - - ---- 2/2/0/0/0 0/0 "POST /message-bus/742b0b89384944a7b6376e2d6978889d/poll HTTP/1.1"

Mar 20 01:48:59 id22302.example.com haproxy[43043]: 176.243.177.232:61505 [20/Mar/2021:01:48:34.487] http-in discourse_socket/server3 0/0/0/25087/25087 200 396 - - ---- 1/1/0/0/0 0/0 "POST /message-bus/742b0b89384944a7b6376e2d6978889d/poll HTTP/1.1"

Mar 20 01:49:35 id22302.example.com haproxy[43043]: 176.243.177.232:62413 [20/Mar/2021:01:49:35.637] http-in discourse_socket/server3 0/0/0/87/87 200 417 - - ---- 1/1/0/0/0 0/0 "POST /message-bus/742b0b89384944a7b6376e2d6978889d/poll?dlp=t HTTP/1.1"

Mar 20 01:50:36 id22302.example.com haproxy[43043]: 176.243.177.232:62714 [20/Mar/2021:01:50:36.568] http-in discourse_socket/server3 0/0/0/81/81 200 417 - - ---- 1/1/0/0/0 0/0 "POST /message-bus/742b0b89384944a7b6376e2d6978889d/poll?dlp=t HTTP/1.1"

Mar 20 01:51:37 id22302.example.com haproxy[43043]: 176.243.177.232:62993 [20/Mar/2021:01:51:37.541] http-in discourse_socket/server3 0/0/0/78/78 200 417 - - ---- 1/1/0/0/0 0/0 "POST /message-bus/742b0b89384944a7b6376e2d6978889d/poll?dlp=t HTTP/1.1"
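
To confirm from the log that the timeouts cluster on DELETE, the method/status pairs can be tallied. This is a sketch against the default HAProxy HTTP log format shown above; the log path and the field positions are assumptions to verify against your own syslog setup:

```shell
# Field 11 is the HTTP status and field 18 the quoted method, counting
# the syslog prefix fields as in the log lines above. Adjust both the
# path and the field numbers if your log format differs.
LOG=/var/log/haproxy.log   # assumed path; check your rsyslog config

if [ -f "$LOG" ]; then
  awk '$18 ~ /"(GET|POST|PUT|DELETE)/ { print $18, $11 }' "$LOG" \
    | sort | uniq -c | sort -rn
fi
```

A result dominated by `"DELETE 408` pairs while other verbs show 200s would match what the excerpt above shows.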

This is production.log. I never see a DELETE request appear, but I see everything else logged (opening a post, modifying, saving, etc.):

Completed 200 OK in 81ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 14861)

Started GET "/composer_messages?composer_action=edit&topic_id=10&post_id=13" for 176.243.226.205 at 2021-03-20 15:51:09 +0100

Processing by ComposerMessagesController#index as JSON

Parameters: {"composer_action"=>"edit", "topic_id"=>"10", "post_id"=>"13"}

Completed 200 OK in 1191ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 16940)

Started POST "/presence/publish" for 176.243.226.205 at 2021-03-20 15:51:14 +0100

Processing by Presence::PresencesController#handle_message as */*

Parameters: {"state"=>"editing", "topic_id"=>"10", "post_id"=>"13"}

Completed 200 OK in 9ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 3192)

Started PUT "/t/e-previsto-un-ritorno-di-freddo-nutro/10" for 176.243.226.205 at 2021-03-20 15:51:17 +0100

Processing by TopicsController#update as */*

Parameters: {"title"=>"E' previsto un ritorno di freddo, nutro?", "category_id"=>1, "slug"=>"e-previsto-un-ritorno-di-freddo-nutro", "topic_id"=>"10", "topic"=>{"title"=>"E' previsto un ritorno di freddo, nutro?", "category_id"=>1}}

Completed 200 OK in 19ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 5149)

Started POST "/presence/publish" for 176.243.226.205 at 2021-03-20 15:51:17 +0100

Processing by Presence::PresencesController#handle_message as */*

Parameters: {"state"=>"closed", "topic_id"=>"10"}

Completed 200 OK in 10ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 3159)

Started PUT "/posts/13" for 176.243.226.205 at 2021-03-20 15:51:17 +0100

Processing by PostsController#update as */*

Parameters: {"post"=>{"edit_reason"=>"", "cooked"=>"<p>post di test per vedere. ssss</p>", "raw"=>"post di test per vedere. ssss", "topic_id"=>"10", "raw_old"=>""}, "id"=>"13"}

Completed 200 OK in 3296ms (Views: 0.1ms | ActiveRecord: 0.0ms | Allocations: 86638)

Started GET "/t/10.json" for 176.243.226.205 at 2021-03-20 15:51:20 +0100

Processing by TopicsController#show as JSON

Parameters: {"id"=>"10"}

Completed 200 OK in 143ms (Views: 0.1ms | ActiveRecord: 0.0ms | Allocations: 52707)

Started GET "/draft.json?draft_key=topic_10" for 176.243.226.205 at 2021-03-20 15:51:27 +0100

Processing by DraftController#show as JSON

Parameters: {"draft_key"=>"topic_10"}

Completed 200 OK in 38ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 4317)

Started GET "/posts/13" for 176.243.226.205 at 2021-03-20 15:51:27 +0100

Processing by PostsController#show as JSON

Parameters: {"id"=>"13"}

Completed 200 OK in 16ms (Views: 0.1ms | ActiveRecord: 0.0ms | Allocations: 5026)

Started POST "/presence/publish" for 176.243.226.205 at 2021-03-20 15:51:29 +0100

Processing by Presence::PresencesController#handle_message as */*

Parameters: {"state"=>"editing", "topic_id"=>"10", "post_id"=>"13"}

Completed 200 OK in 20ms (Views: 0.3ms | ActiveRecord: 0.0ms | Allocations: 3304)

Started GET "/posts/13" for 176.243.226.205 at 2021-03-20 15:51:38 +0100

Processing by PostsController#show as JSON

Parameters: {"id"=>"13"}

Completed 200 OK in 121ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 56890)

The Puma log does not show DELETE requests either:

2021-03-20 16:01:38 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021:16:01:38 +0100] "GET /t/e-previsto-un-ritorno-di-freddo-nutro/10 HTTP/1.1" 200 - 1.0685

2021-03-20 16:01:39 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021:16:01:39 +0100] "GET /uploads/default/optimized/1X/_129430568242d1b7f853bb13ebea28b3f6af4e7_2_32x32.png HTTP/1.1" 200 11945 0.0119

2021-03-20 16:01:39 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021 16:01:39] "POST /message-bus/557c51e14e57450c8c76ab216af0a279/poll HTTP/1.1" HIJACKED -1 0.0956

2021-03-20 16:01:42 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021:16:01:42 +0100] "POST /message-bus/557c51e14e57450c8c76ab216af0a279/poll HTTP/1.1" 200 - 0.0128

2021-03-20 16:01:43 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021 16:01:43] "POST /message-bus/557c51e14e57450c8c76ab216af0a279/poll HTTP/1.1" HIJACKED -1 0.0886

2021-03-20 16:01:54 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021:16:01:54 +0100] "GET /t/e-previsto-un-ritorno-di-freddo-nutro/10 HTTP/1.1" 200 - 0.1436

2021-03-20 16:01:55 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021:16:01:55 +0100] "GET /uploads/default/optimized/1X/_129430568242d1b7f853bb13ebea28b3f6af4e7_2_32x32.png HTTP/1.1" 200 11945 0.0383

2021-03-20 16:01:56 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021 16:01:56] "POST /message-bus/1bddd2f9aaf54ec5a589bc74c9c8a409/poll HTTP/1.1" HIJACKED -1 0.0453

2021-03-20 16:01:59 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021:16:01:59 +0100] "POST /message-bus/1bddd2f9aaf54ec5a589bc74c9c8a409/poll HTTP/1.1" 200 - 0.0129

2021-03-20 16:01:59 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021 16:01:59] "POST /message-bus/1bddd2f9aaf54ec5a589bc74c9c8a409/poll HTTP/1.1" HIJACKED -1 0.0096

2021-03-20 16:02:07 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021:16:02:07 +0100] "GET /draft.json?draft_key=topic_10 HTTP/1.1" 200 - 0.0131

2021-03-20 16:02:08 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021:16:02:08 +0100] "GET /posts/13 HTTP/1.1" 200 - 0.3559

2021-03-20 16:02:08 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021:16:02:08 +0100] "GET /composer_messages?composer_action=edit&topic_id=10&post_id=13 HTTP/1.1" 200 - 0.2087

2021-03-20 16:02:12 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021:16:02:12 +0100] "POST /presence/publish HTTP/1.1" 200 - 0.0155

2021-03-20 16:02:24 +0100 : |Puma| 176.243.226.205 - - [20/Mar/2021 16:02:24] "POST /message-bus/1bddd2f9aaf54ec5a589bc74c9c8a409/

That means your reverse proxy is misconfigured and dropping the non-GET/POST requests. That is a quite common problem, and one of the reasons we ship a pre-configured reverse proxy inside the container, so people don’t need to fiddle with this.

If you remove HAProxy and let the container itself listen on ports 80/443, does the problem still happen?
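
One way to test this is to send each verb both through the proxy and straight at the container, then compare. A minimal sketch — both URLs are placeholders for your own site and for whatever port or socket the container actually listens on:

```shell
SITE="http://discourse.example.com"   # through Apache/HAProxy (placeholder)
DIRECT="http://127.0.0.1"             # container's own nginx, bypassing the proxy (placeholder)

for verb in GET POST PUT DELETE; do
  for base in "$SITE" "$DIRECT"; do
    # Print status code and elapsed time; the 10 s cap makes a dropped
    # request show up as a timeout instead of hanging the loop.
    result=$(curl -s -o /dev/null -w '%{http_code} %{time_total}s' \
                  -X "$verb" --max-time 10 "$base/draft.json" || echo "failed")
    echo "$verb via $base -> $result"
  done
done
```

If DELETE stalls only when it goes through the proxy, while answering instantly against the container (even with an error such as `["BAD CSRF"]`), the proxy layer is at fault.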

@Falco I just tried; the problem still happens. Could it be Puma itself, as the web server, that drops the DELETE requests?

We don’t use Puma in production if you install following the official Discourse Standard Installation.

We use the DELETE/PUT verbs across the app for every single install, and don’t see any problems, so I’d guess it’s something specific to your install.

Long shot, but do you have a WAF of some sort fronting this domain, @Andrea_Giovacchini?

I had tried an alternate install after trying the official one, which had the same problem.

Now I’ve tried the official setup again (https://github.com/discourse/discourse/blob/master/docs/INSTALL-cloud.md), but it fails on “./launcher rebuild app” with this message (two days ago it was not failing, though it was also affected by the DELETE problem):

Make sure that `gem install libv8 -v '8.4.255.0' --source 'https://rubygems.org/'` succeeds before bundling.

In Gemfile:
  mini_racer was resolved to 0.3.1, which depends on
    libv8

/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer/parallel_installer.rb:220:in `handle_error'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer/parallel_installer.rb:102:in `call'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer/parallel_installer.rb:71:in `call'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer.rb:270:in `install_in_parallel'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer.rb:210:in `install'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer.rb:90:in `block in run'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/process_lock.rb:12:in `block in lock'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/process_lock.rb:9:in `open'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/process_lock.rb:9:in `lock'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer.rb:72:in `run'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer.rb:24:in `install'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/cli/install.rb:64:in `run'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/cli.rb:259:in `block in install'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/settings.rb:115:in `temporary'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/cli.rb:258:in `install'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/vendor/thor/lib/thor.rb:392:in `dispatch'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/cli.rb:30:in `dispatch'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/vendor/thor/lib/thor/base.rb:485:in `start'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/cli.rb:24:in `start'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/exe/bundle:49:in `block in <top (required)>'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/friendly_errors.rb:130:in `with_friendly_errors'
  /usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/exe/bundle:37:in `<top (required)>'
  /usr/local/bin/bundle:23:in `load'
  /usr/local/bin/bundle:23:in `<main>'

I, [2021-03-20T23:42:40.998961 #1]  INFO -- : Terminating async processes
I, [2021-03-20T23:42:40.998991 #1]  INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main pid: 66
I, [2021-03-20T23:42:40.999030 #1]  INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 183
2021-03-20 23:42:40.999 UTC [66] LOG:  received fast shutdown request
183:signal-handler (1616283760) Received SIGTERM scheduling shutdown...
2021-03-20 23:42:41.013 UTC [66] LOG:  aborting any active transactions
2021-03-20 23:42:41.014 UTC [66] LOG:  background worker "logical replication launcher" (PID 75) exited with exit code 1
2021-03-20 23:42:41.016 UTC [70] LOG:  shutting down
183:M 20 Mar 2021 23:42:41.058 # User requested shutdown...
183:M 20 Mar 2021 23:42:41.058 * Saving the final RDB snapshot before exiting.
183:M 20 Mar 2021 23:42:41.061 * DB saved on disk
183:M 20 Mar 2021 23:42:41.061 # Redis is now ready to exit, bye bye...
2021-03-20 23:42:41.263 UTC [66] LOG:  database system is shut down


FAILED
--------------------
Pups::ExecError: cd /var/www/discourse && su discourse -c 'bundle install --deployment --retry 3 --jobs 4 --verbose --without test development' failed with return #<Process::Status: pid 348 exit 5>
Location of failure: /pups/lib/pups/exec_command.rb:112:in `spawn'
exec failed with the params {"cd"=>"$home", "hook"=>"bundle_exec", "cmd"=>["su discourse -c 'bundle install --deployment --retry 3 --jobs 4 --verbose --without test development'"]}
9b0ca932a7dd52ccdd11e268910e3edcd8369c0c08f65e7f8686d542b9be473b
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.
➜  discourse git:(master) ✗

@Falco I retried the official setup, but with “tests-passed” instead of stable. It does not give the mini_racer gem error, but I still get the DELETE problem, as you can see in the video with tail on the nginx log and the browser console open.

POSTs appear immediately in the nginx log; DELETEs appear only after the error, which is weird.

This is the screen recording:

This test was done, as suggested, by running Discourse directly on port 80 without other software like HAProxy in the middle.

Cool, I will take a look and try to repro this tomorrow, thanks for the report.

Yes, “stable” is unfortunately broken at the moment due to a regression in Bundler.

We will work on a fix over the next day.

@Falco have you reproduced the problem?

Hey @Andrea_Giovacchini,

First, there are two different issues here:

  1. The blocked DELETE requests.

  2. The broken rebuild for people on the “stable” branch.

Issue 2 was fixed today via Can't rebuild current stable? - #6 by david

As for issue 1, here are my findings.

I have a brand new install of Discourse running, and while sending a test DELETE command I get:

curl -X DELETE https://falcoland.falco.dev/draft.json
["BAD CSRF"]

This is expected, as that command must also come with a valid session cookie for the user.
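
To reproduce the browser’s request more closely, the session cookie and CSRF token can be copied from the browser’s devtools (Network tab, request headers of any XHR). Every value below is a placeholder:

```shell
# Placeholder values -- copy the real ones from a logged-in browser
# session. Rails expects the token in the X-CSRF-Token header; the
# cookie names and the draft_key parameter are assumptions to verify
# against what your browser actually sends.
CSRF_TOKEN="paste-token-here"
COOKIES="_t=paste-here; _forum_session=paste-here"

curl -s -X DELETE "https://discourse.example.com/draft.json" \
  -H "X-CSRF-Token: $CSRF_TOKEN" \
  -H "Cookie: $COOKIES" \
  --data "draft_key=topic_10" \
  -w '\nHTTP %{http_code} in %{time_total}s\n' \
  || echo "request failed (placeholder host)"
```

With valid credentials this should mirror what the composer does, and the timing (instant vs. ~30 s) tells you whether the authenticated path behaves differently from the anonymous curl.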

Meanwhile, on your site:

curl -X DELETE http://apicolturaitalianawebinar.it/draft.json -v
*   Trying 213.229.86.117:80...
* Connected to apicolturaitalianawebinar.it (213.229.86.117) port 80 (#0)
> DELETE /draft.json HTTP/1.1
> Host: apicolturaitalianawebinar.it
> User-Agent: curl/7.75.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Date: Mon, 22 Mar 2021 18:51:55 GMT
< Server: Apache/2.4.37 (centos) OpenSSL/1.1.1g Phusion_Passenger/6.0.4 mod_perl/2.0.11 Perl/v5.26.3
< X-Runtime: 0.001594
< X-Request-Id: ebe50251-0f06-4b71-acd0-887fb2666abb
< X-Powered-By: Phusion Passenger 6.0.4
< Content-Length: 1351
< Status: 404 Not Found
< Content-Type: text/html; charset=UTF-8
< 

I get a 404 served by Phusion Passenger. That doesn’t appear to be a normal Discourse installation :thinking:.

I shut down Discourse because it was unusable with the DELETE problem; another application is responding at that URL now.

Oh I see. Since the problem can’t be reproduced in a new install, I’d say we can close this one. If the issue happens again, please open another topic with the reproduction steps.

@Falco I’ve put it up again at http://discourse.apicolturaitalianafb.it/ so you can try; it is a standard install following the docs here.

The curl command gives “BAD CSRF” and appears immediately in Discourse’s internal nginx log, but DELETEs from the interface still don’t work, and they show up in the log delayed by about 35 seconds, as in the video I sent.

If I put discourse.apicolturaitalianafb.it:8880 as the hostname in app.yml, rebuild, and go to http://discourse.apicolturaitalianafb.it:8880, it works normally, but I can’t use it that way.

Since Apache is serving a production website, I tried putting Discourse behind HAProxy as per the documentation on this site, and DELETEs from Discourse stop working even though your curl command works. I also tried an Apache reverse proxy: curl works, but DELETEs from Discourse do not. I then tried configuring Discourse to listen on a unix socket and reverse proxying to that, but the problem is the same.

To me the evidence suggests that the reverse proxying doesn’t “kill” the DELETEs, but rather that the JavaScript in Discourse for some reason stops issuing correct HTTP DELETEs.

Is the fresh install you’ve tried directly exposed?