Start several temporary sidekiq processes to clear queue backlog

(Dean Taylor) #1


There are a substantial number of items “enqueued” in my “default” queue for sidekiq to process.
The server the installation is on has over 60GB of RAM and a number of cores not doing anything (15% CPU usage).

I would like to start some temporary sidekiq processes to get the queue down after an import.

What is the correct command line to do this without needing to rebuild the discourse docker installation?


(Dean Taylor) #2

@sam could you please look at this? Hopefully it’s a simple command line. Thank you!

(Ed Ceaser) #3

I don’t know if this works in the Docker install, but I’m able to run sidekiq queues by hand by just doing

RAILS_ENV=production bundle exec sidekiq -L <logfile>

from the discourse root. It should be noted that the sidekiq binary is on my path from the gem installed in my ruby environment. I’m not sure if it is on the path in the docker install so you may have to go hunting for it.

(Kane York) #4

Insert this in your app.yml, under the env section:

  UNICORN_SIDEKIQS: 4 # CHANGE ME: number of sidekiq processes to run

Rebuild, wait for the queue to drain, then change it back to 1.

(Dean Taylor) #5

@riking I want to avoid a rebuild; I don’t want to lose anything that’s there at the moment.
@eac I think you have pointed me in the right direction.

I ended up with something slightly different based on what starts the unicorn process.

However, the sidekiq log quickly fills with:

could not obtain a database connection within 5.000 seconds (waited 5.074 seconds)

Here is what I am running:

cd /var/docker
./launcher ssh app
cd /var/www/discourse
exec sudo -E -u discourse LD_PRELOAD=/usr/lib/ RAILS_ENV=production bundle exec sidekiq -L /shared/extra-side2.log

Looks like it starts an extra 25 processes.

Any ideas?

(Ed Ceaser) #6

Hmm… I am doing the same thing as you, except with a much smaller environment than what you’d get by doing sudo -E.

My env is PIDFILE=tmp/pids/ RAILS_ENV=production, and my commandline is bundle exec sidekiq -L log/sidekiq.log (running as the discourse user).

It’s possible that you’re running into connection limits on postgres or something. I’m not sure what the default configuration is in the all-in-one docker container, assuming that’s what you are using. So I’d suggest lowering the sidekiq concurrency first and seeing if that helps.

As far as concurrency goes, I forgot to mention: sidekiq by default reads the config/sidekiq.yml file, where its concurrency is specified. I created a file with

:concurrency: 5

in order to start 5 workers. That is how I’m running sidekiq on my discourse install.
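For reference, a minimal config/sidekiq.yml along those lines might look like the sketch below; the :queues list is an assumption (Ed’s file contains only the concurrency line), so adjust it to the queues you actually need:

```yaml
---
# Limit sidekiq to 5 worker threads instead of its larger default.
:concurrency: 5
# Optional: restrict which queues this process drains (assumed example).
:queues:
  - default
```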

(Dean Taylor) #7

Thanks @eac

I ultimately just ended up passing the concurrency value on the command line, like so, for a concurrency of 5:

exec sudo -E -u discourse LD_PRELOAD=/usr/lib/ RAILS_ENV=production bundle exec sidekiq -L /shared/extra-side2.log -c 5 

I started a number of these processes until I found the sweet spot where the DB didn’t cry about connections:

exception: FATAL:  remaining connection slots are reserved for non-replication superuser connections

(Ed Ceaser) #8

Cool, glad that worked out for you. I believe the default max_connections for postgres on a Debian system (not sure about Ubuntu’s packaging, or whether the discourse docker image configures it differently) is 100, so it makes sense that 25 workers plus the discourse unicorns would exceed that limit if each process holds more than one DB connection.
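A rough sketch of that connection budget (the extra-process and unicorn counts here are assumptions for illustration, not measured values):

```shell
#!/bin/sh
# Rough Postgres connection budget, assuming Debian's default
# max_connections=100 and one DB connection per sidekiq thread.
MAX_CONNECTIONS=100
THREADS_PER_SIDEKIQ=25   # sidekiq's default concurrency
EXTRA_SIDEKIQS=4         # assumed number of hand-started processes
UNICORN_CONNECTIONS=10   # assumed connections held by unicorn workers
used=$(( THREADS_PER_SIDEKIQ * EXTRA_SIDEKIQS + UNICORN_CONNECTIONS ))
echo "estimated connections in use: $used / $MAX_CONNECTIONS"
```

With those assumed numbers the estimate already exceeds the limit, which matches the “remaining connection slots are reserved” error above.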

(Dean Taylor) #9

FYI, config/sidekiq.yml doesn’t seem to exist, nor does any sidekiq.yml on the docker image; that’s why I passed the value as a command line parameter.

(Ed Ceaser) #10

Yeah I had to add it myself. The contents are only that concurrency line that I posted above.

(Sam Saffron) #11

FYI, nothing important should be in the actual image, all important stuff is in the shared folder on the host.

This is happening because the connection pool defaults to 8 connections while sidekiq defaults to 20 or so threads (see discourse/discourse_defaults.conf at master · discourse/discourse · GitHub). You can pass in

DISCOURSE_DB_POOL=25 to work around it.
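An inline assignment like that makes the variable visible only to the command it prefixes. The sketch below demonstrates the mechanism with a plain echo; the commented invocation combining it with the earlier sidekiq command uses assumed paths and pool size, so adjust for your install:

```shell
#!/bin/sh
# Inline env assignments apply only to the child process that follows;
# the echo just shows the variable arriving in that process.
DISCOURSE_DB_POOL=25 sh -c 'echo "pool size seen by child: $DISCOURSE_DB_POOL"'

# The real invocation would be along these lines (assumed paths):
#   sudo -E -u discourse DISCOURSE_DB_POOL=25 RAILS_ENV=production \
#     bundle exec sidekiq -c 25 -L /shared/extra-side2.log
```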

(Dean Taylor) #12

Thanks @sam for confirming there shouldn’t be anything important in the actual image.
My situation was slightly different as an import was in progress; I’ll use the shared folder for everything next time.

Thanks again.

(William Herry) #13

You also need to specify the queue with the -q option, e.g. -q default.