We bumped our RAM and rebuilt the database, but the problem persists.
I’d like to ask if devs are aware of this problem… Thx!
Is there any quick fix I can do as admin? Longer timeout maybe?
Not sure. It sure seems like a longer timeout would help. It happens way more often in heavily populated threads, so it feels like there’s some sampling or scanning step that takes too long.
If I can figure out how to implement this, I’ll report back. I would double it.
Yep, the problem persists with 2.5.0.beta2.
Did you run discourse setup after you changed the RAM? There are settings that need to be updated to take advantage of the extra memory.
Not sure, as the implementation rests with another person. But I’ll mention to him that this needs to be done to complete the process. Thanks so much!
I get a lot of 502 errors when moving posts.
Do you have a plan to improve this scenario?
Hello, have you been able to find the cause? It would be great to parameterize the timeout as an app.yml ENV variable or a Discourse site setting, for those of us with low memory…
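To make the request concrete, here is a hypothetical sketch of what such a knob could look like in containers/app.yml. To be clear, DISCOURSE_MOVE_POSTS_TIMEOUT is not a real setting today; the name and value are made up purely to illustrate the shape of the feature being asked for:

```yaml
# containers/app.yml (hypothetical fragment).
# DISCOURSE_MOVE_POSTS_TIMEOUT does not exist in Discourse today;
# it is shown only to illustrate the setting being requested here.
env:
  DISCOURSE_MOVE_POSTS_TIMEOUT: 60   # seconds; hypothetical knob
```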
Maybe a dumb question?
When you move a lot of posts like this is this process managed by Sidekiq?
Sorry, did not search the code…
Took a quick peek at the Ruby code base, and yes: when the move-posts function is called, the jobs are queued with Sidekiq.
Not being a Discourse developer, it would appear to a casual code-browsing observer that delays, errors, or timeouts when moving posts would be directly related to Sidekiq performance and configuration.
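The pattern described above can be sketched in plain Ruby. The class and job names below are illustrative stand-ins, not Discourse’s actual code, and the queue is faked so the sketch runs without Redis; in a real install, Sidekiq stores the job in Redis and a separate worker process picks it up later:

```ruby
# Illustrative sketch only: names are assumptions, not Discourse's code.
# The point is the shape: moving posts enqueues a background job rather
# than doing all the work inside the web request.

# Stand-in for Sidekiq's Redis-backed queue, so this runs anywhere.
class FakeJobQueue
  def initialize
    @jobs = []
  end

  # Record a job to be processed later by a worker process.
  def enqueue(job_name, **args)
    @jobs << { job: job_name, args: args }
  end

  attr_reader :jobs
end

QUEUE = FakeJobQueue.new

# Hypothetical "move posts" entry point: the request returns quickly,
# and the heavy lifting happens when a worker picks up the queued job.
def move_posts(post_ids, destination_topic_id)
  QUEUE.enqueue(:notify_moved_posts,
                post_ids: post_ids,
                destination_topic_id: destination_topic_id)
  :queued
end

puts move_posts([1, 2, 3], 42)   # prints "queued"
puts QUEUE.jobs.length           # prints 1
```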
I tried studying how Discourse uses Sidekiq at the system level a number of days ago, but I was not successful in finding the “Cliff’s Notes” version for dummies.
So I went to the Sidekiq web site to try to get a better feel for what is going on under the covers, and noticed there were three different offerings. I got confused and moved on, because I could not work out, within my short attention span and need for immediate gratification, which version of Sidekiq Discourse uses, or exactly which features and switches can be configured…
Being a novice in this area, I’m interested to know exactly what Sidekiq architecture, features, switches, and environment variables are available in Discourse… but so far I “still haven’t found what I’m looking for”, sung to the tune of our fav U2 song.
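For anyone else hunting the same details: on a standard Docker install you can check the installed Sidekiq gem from inside the container. The commands below assume the stock /var/discourse layout, and Sidekiq::VERSION is a real constant the gem exposes:

```shell
cd /var/discourse
./launcher enter app      # get a shell inside the running container

# Which Sidekiq gem version is installed?
gem list sidekiq

# Or, from the Rails console (rails c):
#   Sidekiq::VERSION
```

As far as I can tell, Discourse ships the open-source Sidekiq offering (not Pro or Enterprise), and admins can also watch queue activity live at the /sidekiq path on their own forum.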
All answers spring from curiosity…
On a tip from a leader in another thread, we turned off all plug-ins except “who’s online” and now, no issues with moves lately.
So cautious optimism here. Will update if things change.
Thanks to everyone who has lent assistance on this issue!
Which plugins specifically did you disable?
Ideally, I would have disabled each one on its own to check which one(s) were causing problems, but I didn’t expect it to work, so I switched them all off in one go.
Very likely Babble; I am guessing it has hooks that fire on post move.
Good to know. Thanks. And props to @featheredtoast for the fix.
My community recently started experiencing the 502 error issue when moving posts, especially between large threads. I had no custom plug-ins installed. Following the advice from another Discourse thread, I increased unicorn_workers to 10 and db_shared_buffers to 4096MB, but that did not improve the situation. Below is our forum’s ./discourse-doctor log. Hoping to get some pointers. Thank you!
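For reference, on a standard Docker install those two settings live in containers/app.yml. A fragment with the values I tried might look like this (the right values depend on your hardware):

```yaml
# containers/app.yml (fragment) -- the two settings mentioned above.
params:
  db_shared_buffers: "4096MB"   # PostgreSQL shared buffers

env:
  UNICORN_WORKERS: 10           # number of unicorn web workers
```

After editing app.yml, the container has to be rebuilt (./launcher rebuild app) for the changes to take effect.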
==================== DOCKER INFO ====================
DOCKER VERSION: Docker version 17.10.0-ce, build f4ffd25
DOCKER PROCESSES (docker ps -a)
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ddfb2222fd64 local_discourse/app "/sbin/boot" 10 days ago Up 10 days 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp app
Discourse container app is running
==================== PLUGINS ====================
- git clone https://github.com/discourse/docker_manager.git
- git clone https://github.com/discourse/discourse-solved.git
No non-official plugins detected. See https://github.com/discourse/discourse/blob/master/lib/plugin/metadata.rb for the official list.
========================================
Discourse version at localhost: Discourse 2.6.0.beta2
==================== MEMORY INFORMATION ====================
RAM (MB): 16434
      total   used   free   shared   buff/cache   available
Mem:  16048   5605    919     4255         9523        5850
Swap:  2047    437   1610
==================== DISK SPACE CHECK ====================
---------- OS Disk Space ----------
Filesystem Size Used Avail Use% Mounted on
/dev/disk/by-label/DOROOT 315G 132G 168G 45% /
/dev/disk/by-label/DOROOT 315G 132G 168G 45% /var/lib/docker/aufs
/dev/disk/by-label/DOROOT 315G 132G 168G 45% /var/lib/docker/plugins
---------- Container Disk Space ----------
unknown shorthand flag: 'w' in -w
See 'docker exec --help'.
==================== DISK INFORMATION ====================
Disk /dev/vda: 320 GiB, 343597383680 bytes, 671088640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 29B528BA-16C4-402E-BEE9-53555C8B6F10
Device Start End Sectors Size Type
/dev/vda1 2048 671086591 671084544 320G Linux filesystem
==================== END DISK INFORMATION ====================
Hi, I encounter the same issue. Can’t split megatopics because of this.
Also tried in safe-mode, didn’t change anything.
No issue in my development Discourse installation (same version 2.6.0.beta2) though.
And nothing in the logs.
I’ve been getting these 502 errors for a year ;(
I don’t think we’ve asked you: what plugins are you running?
I disabled all plugins to check whether it is a plugin-related bug. It looks like it happens consistently with long threads.