Click on the ‘Busy’ tab at the top of the /sidekiq screen and you’ll see your queues and the jobs inside them. Each job also shows how long it’s been active, which is a great indicator of problems.
I assume your Critical queue jobs are getting handled first, but let’s confirm that these are indeed the jobs that are causing the slowness.
Looks like you don’t have any seriously slow tasks there… What’s your CPU load like during the processing? If it’s low, you can try increasing UNICORN_SIDEKIQS. It’s currently set to 1 for you; each extra Sidekiq process adds another 5 job processors.
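To check the load, watch the host while the queues are being worked; for example:

uptime   # 1/5/15-minute load averages
top      # per-process CPU - the sidekiq processes are visible from the host too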
In contrast, the UNICORN_WORKERS setting affects the number of concurrent web requests that can be handled - this is not related to Sidekiq and increasing the value won’t help solve this issue.
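For reference, both settings live in the env section of containers/app.yml; a minimal sketch (the values here are only illustrative):

env:
  ## number of Sidekiq processes to run (each one runs 5 job processors)
  UNICORN_SIDEKIQS: 2
  ## number of Unicorn web workers serving HTTP requests
  UNICORN_WORKERS: 4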
Do you see anything useful in the logs? They’re located in /var/discourse/shared/standalone/log
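For example, from the host (assuming the standard Docker install):

ls /var/discourse/shared/standalone/log/rails/
tail -f /var/discourse/shared/standalone/log/rails/production.log   # Rails/Sidekiq output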
Thanks again @bartv, I’ve added UNICORN_SIDEKIQS=5 to the app.yml and run ./launcher restart app. Looking back at the Sidekiq dashboard, it’s still only processing around 10 jobs per second.
Have I got UNICORN_SIDEKIQS=5 correct, or should it be UNICORN_SIDEKIQS: 5?
The logs are showing thousands of entries for:
Started GET "/sidekiq/stats" for 86.1.10.29 at 2018-06-13 10:34:22 +0000
It should be UNICORN_SIDEKIQS: 5 - the same formatting as any other setting in app.yml. You can verify it took effect by going back to the Busy tab in Sidekiq - the number of processes listed there should match the value you entered.
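You can also confirm it from the shell; assuming the standard Docker install:

cd /var/discourse
./launcher enter app      # open a shell inside the running container
ps aux | grep [s]idekiq   # one line per Sidekiq process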
And a tip: to quickly update these settings you don’t need to do a full rebuild; just do this:
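cd /var/discourse
./launcher destroy app   # remove the container (site data lives in shared/ and is kept)
./launcher start app     # recreate it with the new env values - no full rebuild needed

(This assumes the standard Docker install; env changes in app.yml are applied when the container is created, so recreating it is enough.)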
What does the Redis log say? I had a similar issue with Redis running out of memory; the rebuild log provided the solution to this:
186:M 01 Jun 11:02:31.042 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
186:M 01 Jun 11:02:31.042 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
Run these commands, then restart your Discourse:
sysctl vm.overcommit_memory=1
echo never > /sys/kernel/mm/transparent_hugepage/enabled
Note that you’ll still need to make these persistent! (See the quoted text above to learn how)
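Concretely, following the warnings quoted above, that means (on the host, as root):

echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf                                  # overcommit survives reboot
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local     # THP stays disabled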