Optimizing a large Discourse multisite: database and Sidekiq bottlenecks

I’m seeking some expert guidance on optimizing a Discourse multisite setup. I have a single web VM and a separate database VM on a major cloud provider. While both machines have decent specs, I’m finding that my system gets overwhelmed by a large volume of background jobs, which seems to be stressing the database.

My current app.yml configuration is:

  • UNICORN_WORKERS: 4
  • UNICORN_SIDEKIQS: 4
  • DISCOURSE_SIDEKIQ_WORKERS: 10
  • DISCOURSE_DB_POOL: 8
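
For context, here is roughly how those values sit in the env: section of a standard discourse_docker app.yml, as I understand the settings (a lightly annotated sketch of my own file; the connection figures in the comments are my back-of-the-envelope math, not measurements):

  env:
    UNICORN_WORKERS: 4             # unicorn web processes
    UNICORN_SIDEKIQS: 4            # Sidekiq processes forked alongside unicorn
    DISCOURSE_SIDEKIQ_WORKERS: 10  # threads per Sidekiq process
    DISCOURSE_DB_POOL: 8           # ActiveRecord connection pool per process
    # Rough math: 4 Sidekiq processes x 10 threads = up to 40 jobs in flight,
    # all competing for 4 x 8 = 32 pooled connections (plus whatever the 4
    # unicorn workers hold), so the database sees heavy contention long before
    # any hard connection limit is reached.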

Based on my observations, the bottleneck isn’t a hard connection limit being hit, but rather the sheer volume of jobs competing for database resources at the same time. The Sidekiq queues are constantly backing up, which makes the site feel slow, even for basic administrative tasks.

I’m looking for a generic approach to tune the system for stability and performance. Specifically, I’d like to understand the best practices for:

  • Sidekiq Concurrency: How should DISCOURSE_SIDEKIQ_WORKERS be sized in a multisite environment to handle a high job volume without stressing the database?
  • Queue Separation: Is it recommended to run separate Sidekiq processes for different queues (e.g., critical vs. low priority), so that heavy jobs don’t block more urgent ones? A rough sketch of what I have in mind follows this list.
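
To make that second question concrete, here is what I mean in plain Sidekiq terms: two hypothetical per-process config files in sidekiq.yml style. The queue names (critical, default, low, ultra_low) are the ones Discourse jobs appear to use; the file names, concurrency values, and the split itself are only illustration, and I realise the stock discourse_docker launcher doesn’t expose this as a simple app.yml knob:

  # sidekiq_critical.yml (hypothetical): a process reserved for urgent work
  concurrency: 5
  queues:
    - critical
    - default

  # sidekiq_low.yml (hypothetical): a separate process for heavy background work
  concurrency: 5
  queues:
    - low
    - ultra_low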

I’m not looking for a solution that requires a major architectural change or moving to a different web server at this time, as I want to keep the process as simple and low-risk as possible. I’m hoping to get advice on a safe and effective path forward.

Thanks!

Discourse should be able to adapt those settings based on your system resources. Note that it is safe to re-run the ./discourse-setup script if you have recently increased your system resources; the script can adapt to the increased resources and adjust your .yml accordingly.

Sounds like you need fewer workers, then?

I’m pretty sure it already knows to prioritize the high-priority jobs.
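
As far as I know that prioritisation comes from Sidekiq’s own queue handling: each process drains the queues in priority order (or by weight), so critical jobs are picked up ahead of low and ultra_low ones without needing dedicated processes. Conceptually it looks something like this sidekiq.yml-style sketch; the weights are made up for illustration and this is not necessarily Discourse’s shipped config:

  # One process drains every queue; listing names with weights makes urgent
  # queues get sampled far more often, while listing them without weights
  # would mean strict ordering (always drain critical before default, etc.).
  concurrency: 10
  queues:
    - [critical, 8]
    - [default, 4]
    - [low, 2]
    - [ultra_low, 1]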
