After installing Discourse with “Docker version 20.10.5, build 55c4c88” on CentOS 8 I have the site running, but it always times out on DELETE calls to /draft.json.
I have Apache running, and I tried both an Apache reverse proxy and HAProxy; the behaviour is the same.
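For context, the Apache attempt was a plain mod_proxy setup, roughly like this (a sketch with placeholder hostname and port, not my exact config):

```apache
# minimal reverse-proxy vhost (requires mod_proxy and mod_proxy_http)
<VirtualHost *:80>
    ServerName discourse.example.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8880/
    ProxyPassReverse / http://127.0.0.1:8880/
</VirtualHost>
```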
I don’t know if it is in any way connected, but I find lots of these errors in unicorn.stdout.log:
==> ./shared/standalone/log/rails/unicorn.stdout.log <==
2021-03-18T10:02:25.138Z pid=108 tid=ocs ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
Starting up 1 supervised sidekiqs
Loading Sidekiq in process id 119
2021-03-18T10:40:09.682Z pid=119 tid=orn ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
2021-03-18T10:40:09.684Z pid=119 tid=ohf ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
2021-03-18T10:40:09.685Z pid=119 tid=owv ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
2021-03-18T10:40:09.683Z pid=119 tid=oob ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
2021-03-18T10:40:09.683Z pid=119 tid=omr ERROR: Error fetching job: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
Starting up 1 supervised sidekiqs
Loading Sidekiq in process id 121
53:M 18 Mar 2021 17:07:05.076 * 10 changes in 300 seconds. Saving...
53:M 18 Mar 2021 17:07:05.076 * Background saving started by pid 25018
25018:C 18 Mar 2021 17:07:05.126 * DB saved on disk
25018:C 18 Mar 2021 17:07:05.127 * RDB: 2 MB of memory used by copy-on-write
53:M 18 Mar 2021 17:07:05.177 * Background saving terminated with success
53:M 18 Mar 2021 17:12:06.097 * 10 changes in 300 seconds. Saving...
53:M 18 Mar 2021 17:12:06.098 * Background saving started by pid 25337
25337:C 18 Mar 2021 17:12:06.145 * DB saved on disk
25337:C 18 Mar 2021 17:12:06.145 * RDB: 2 MB of memory used by copy-on-write
53:M 18 Mar 2021 17:12:06.198 * Background saving terminated with success
I still can’t delete posts or invitations, or cancel post modifications.
It looks like a problem with every call that uses the HTTP DELETE method. Can I tell Discourse to use POST instead of DELETE? It seems like “How can Discourse make POST instead of DELETE for ‘smart’ CDN?”, but I’m not using any CDN, just a plain virtual server with CentOS and a Docker install following the instructions from the Discourse site.
Here is part of the HAProxy log with 408 errors on HTTP DELETE. What could be “dropping” the DELETE requests, @codinghorror? If these DELETE requests reach HAProxy, they should also reach the software inside Docker; could something in there be responsible for the problem?
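(For reference: HAProxy answers 408 when it doesn’t receive the complete request within `timeout http-request`, so these are the timeout knobs I’ve been checking; the values below are just examples, not recommendations:)

```haproxy
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout http-request 10s   # 408 is raised when the full request doesn't arrive within this window
```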
This is production.log. I never see a DELETE request appear, but I see everything else registered (opening a post, modifying, saving, etc.):
Completed 200 OK in 81ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 14861)
Started GET "/composer_messages?composer_action=edit&topic_id=10&post_id=13" for 176.243.226.205 at 2021-03-20 15:51:09 +0100
Processing by ComposerMessagesController#index as JSON
Parameters: {"composer_action"=>"edit", "topic_id"=>"10", "post_id"=>"13"}
Completed 200 OK in 1191ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 16940)
Started POST "/presence/publish" for 176.243.226.205 at 2021-03-20 15:51:14 +0100
Processing by Presence::PresencesController#handle_message as */*
Parameters: {"state"=>"editing", "topic_id"=>"10", "post_id"=>"13"}
Completed 200 OK in 9ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 3192)
Started PUT "/t/e-previsto-un-ritorno-di-freddo-nutro/10" for 176.243.226.205 at 2021-03-20 15:51:17 +0100
Processing by TopicsController#update as */*
Parameters: {"title"=>"E' previsto un ritorno di freddo, nutro?", "category_id"=>1, "slug"=>"e-previsto-un-ritorno-di-freddo-nutro", "topic_id"=>"10", "topic"=>{"title"=>"E' previsto un ritorno di freddo, nutro?", "category_id"=>1}}
Completed 200 OK in 19ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 5149)
Started POST "/presence/publish" for 176.243.226.205 at 2021-03-20 15:51:17 +0100
Processing by Presence::PresencesController#handle_message as */*
Parameters: {"state"=>"closed", "topic_id"=>"10"}
Completed 200 OK in 10ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 3159)
Started PUT "/posts/13" for 176.243.226.205 at 2021-03-20 15:51:17 +0100
Processing by PostsController#update as */*
Parameters: {"post"=>{"edit_reason"=>"", "cooked"=>"<p>post di test per vedere. ssss</p>", "raw"=>"post di test per vedere. ssss", "topic_id"=>"10", "raw_old"=>""}, "id"=>"13"}
Completed 200 OK in 3296ms (Views: 0.1ms | ActiveRecord: 0.0ms | Allocations: 86638)
Started GET "/t/10.json" for 176.243.226.205 at 2021-03-20 15:51:20 +0100
Processing by TopicsController#show as JSON
Parameters: {"id"=>"10"}
Completed 200 OK in 143ms (Views: 0.1ms | ActiveRecord: 0.0ms | Allocations: 52707)
Started GET "/draft.json?draft_key=topic_10" for 176.243.226.205 at 2021-03-20 15:51:27 +0100
Processing by DraftController#show as JSON
Parameters: {"draft_key"=>"topic_10"}
Completed 200 OK in 38ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 4317)
Started GET "/posts/13" for 176.243.226.205 at 2021-03-20 15:51:27 +0100
Processing by PostsController#show as JSON
Parameters: {"id"=>"13"}
Completed 200 OK in 16ms (Views: 0.1ms | ActiveRecord: 0.0ms | Allocations: 5026)
Started POST "/presence/publish" for 176.243.226.205 at 2021-03-20 15:51:29 +0100
Processing by Presence::PresencesController#handle_message as */*
Parameters: {"state"=>"editing", "topic_id"=>"10", "post_id"=>"13"}
Completed 200 OK in 20ms (Views: 0.3ms | ActiveRecord: 0.0ms | Allocations: 3304)
Started GET "/posts/13" for 176.243.226.205 at 2021-03-20 15:51:38 +0100
Processing by PostsController#show as JSON
Parameters: {"id"=>"13"}
Completed 200 OK in 121ms (Views: 0.2ms | ActiveRecord: 0.0ms | Allocations: 56890)
That means your reverse proxy is misconfigured and dropping the non-GET/POST requests. That is quite a common problem, and one of the reasons we ship with a pre-configured reverse proxy inside the container, so people don’t need to fiddle with this.
If you remove HAProxy and let the container itself listen on 80/443, does the problem still happen?
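You can also test the path from outside with curl, something like this (adjust the hostname; the draft_key is just an example):

```bash
curl -i -X DELETE 'https://discourse.example.com/draft.json?draft_key=topic_10'
# a healthy proxy returns Discourse's 403 (bad CSRF) response immediately,
# instead of hanging until the proxy times out with a 408
```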
Make sure that `gem install libv8 -v '8.4.255.0' --source 'https://rubygems.org/'` succeeds before bundling.
In Gemfile:
mini_racer was resolved to 0.3.1, which depends on
libv8
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer/parallel_installer.rb:220:in `handle_error'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer/parallel_installer.rb:102:in `call'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer/parallel_installer.rb:71:in `call'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer.rb:270:in `install_in_parallel'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer.rb:210:in `install'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer.rb:90:in `block in run'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/process_lock.rb:12:in `block in lock'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/process_lock.rb:9:in `open'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/process_lock.rb:9:in `lock'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer.rb:72:in `run'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/installer.rb:24:in `install'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/cli/install.rb:64:in `run'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/cli.rb:259:in `block in install'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/settings.rb:115:in `temporary'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/cli.rb:258:in `install'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/vendor/thor/lib/thor.rb:392:in `dispatch'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/cli.rb:30:in `dispatch'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/vendor/thor/lib/thor/base.rb:485:in `start'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/cli.rb:24:in `start'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/exe/bundle:49:in `block in <top (required)>'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/lib/bundler/friendly_errors.rb:130:in `with_friendly_errors'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.2.15/exe/bundle:37:in `<top (required)>'
/usr/local/bin/bundle:23:in `load'
/usr/local/bin/bundle:23:in `<main>'
I, [2021-03-20T23:42:40.998961 #1] INFO -- : Terminating async processes
I, [2021-03-20T23:42:40.998991 #1] INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main pid: 66
I, [2021-03-20T23:42:40.999030 #1] INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 183
2021-03-20 23:42:40.999 UTC [66] LOG: received fast shutdown request
183:signal-handler (1616283760) Received SIGTERM scheduling shutdown...
2021-03-20 23:42:41.013 UTC [66] LOG: aborting any active transactions
2021-03-20 23:42:41.014 UTC [66] LOG: background worker "logical replication launcher" (PID 75) exited with exit code 1
2021-03-20 23:42:41.016 UTC [70] LOG: shutting down
183:M 20 Mar 2021 23:42:41.058 # User requested shutdown...
183:M 20 Mar 2021 23:42:41.058 * Saving the final RDB snapshot before exiting.
183:M 20 Mar 2021 23:42:41.061 * DB saved on disk
183:M 20 Mar 2021 23:42:41.061 # Redis is now ready to exit, bye bye...
2021-03-20 23:42:41.263 UTC [66] LOG: database system is shut down
FAILED
--------------------
Pups::ExecError: cd /var/www/discourse && su discourse -c 'bundle install --deployment --retry 3 --jobs 4 --verbose --without test development' failed with return #<Process::Status: pid 348 exit 5>
Location of failure: /pups/lib/pups/exec_command.rb:112:in `spawn'
exec failed with the params {"cd"=>"$home", "hook"=>"bundle_exec", "cmd"=>["su discourse -c 'bundle install --deployment --retry 3 --jobs 4 --verbose --without test development'"]}
9b0ca932a7dd52ccdd11e268910e3edcd8369c0c08f65e7f8686d542b9be473b
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.
@Falco I retried the official setup but with “tests-passed” instead of stable; it no longer gives the mini_racer gem error, but I still get the DELETE problem, as you can see in the video with tail running on the nginx log and the browser console open.
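(For reference, the channel switch was roughly this, assuming the standard /var/discourse install:)

```bash
cd /var/discourse
# in containers/app.yml change:  version: stable  ->  version: tests-passed
./launcher rebuild app
```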
POSTs appear immediately in the nginx log, DELETEs only after the error, which is weird.
This is the screen recording:
This test was done, as suggested, by running Discourse directly on port 80 without other software like HAProxy in the middle.
Oh I see. Since the problem can’t be reproduced in a new install, I’d say we can close this one. If the issue happens again, please open another topic with the reproduction steps.
The curl command gives a bad CSRF error and shows up immediately in Discourse’s internal nginx log, but deletes from the interface still don’t work and appear in the log delayed by about 35 seconds, as in the video I sent.
If I put discourse.apicolturaitalianafb.it:8880 as the hostname in app.yml, rebuild, and go to http://discourse.apicolturaitalianafb.it:8880, it works normally, but I can’t use it that way.
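(That test setup was roughly this in containers/app.yml; a sketch, not the full file:)

```yaml
expose:
  - "8880:80"   # container's nginx published on host port 8880
env:
  DISCOURSE_HOSTNAME: discourse.apicolturaitalianafb.it:8880
```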
Since Apache is already running a production website, I tried putting Discourse behind HAProxy as per the documentation on this site, and deletes from Discourse stop working, but your curl command works. I also tried an Apache reverse proxy: curl works, deletes from Discourse don’t. I then tried configuring Discourse to listen on a unix socket and reverse proxying to that, but the problem is the same.
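(The socket attempt followed the official howto for running other websites on the same machine; in containers/app.yml I swapped the exposed ports for the socketed template, roughly like this:)

```yaml
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.socketed.template.yml"   # nginx listens on shared/standalone/nginx.http.sock instead of a TCP port
#expose:           # commented out: no ports published when using the socket
#  - "80:80"
```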
For me the evidence is that the reverse proxying doesn’t “kill” the deletes, but that the JavaScript in Discourse for some reason stops issuing correct HTTP DELETEs.
Is the fresh install you tried directly exposed?