Sidekiq is consuming too much memory (using: 522.12M)..., restarting

I'm seeing these errors more and more often in my logs. Should I raise the Sidekiq RSS limit a bit in /var/discourse/containers/app.yml:

env:
  SIDEKIQ_MEMORY_KILLER_MAX_RSS: 700

Or is it normal and should be ignored?

 free -h
              total        used        free      shared  buff/cache   available
Mem:          5.8Gi       3.6Gi       227Mi       310Mi       2.0Gi       1.6Gi
Swap:         1.0Gi        42Mi       981Mi
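
For reference, here is how I check Sidekiq's actual resident memory from inside the container, to see how close it runs to the limit (a rough sketch on a standard /var/discourse install; the RSS column from ps is in kilobytes):

  cd /var/discourse
  ./launcher enter app
  # inside the container: list Sidekiq processes with their RSS (KB)
  ps -eo pid,rss,args | grep [s]idekiq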

Here’s what worked for me.

After monitoring memory usage and tweaking my settings, I was able to stop the frequent “Sidekiq is consuming too much memory…” log messages.

I made this change in my /var/discourse/containers/app.yml:

  UNICORN_WORKERS: 4
  UNICORN_SIDEKIQ_MAX_RSS: 700
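
Note that changes to app.yml only take effect after rebuilding the container. On a standard install that is roughly:

  cd /var/discourse
  ./launcher rebuild app   # takes the site offline for a few minutes while it rebuilds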

Originally, I had UNICORN_WORKERS set to 8, which was too aggressive and left very little headroom for Sidekiq, PostgreSQL, Redis, and the OS.

Dropping to 4 workers freed a significant amount of memory.

Then I raised the Sidekiq RSS limit from the default (~500 MB) to 700 MB, which allows Sidekiq a little more breathing room before it’s automatically restarted.
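
One way to double-check that the new limit was actually picked up after the rebuild is to look at the environment inside the container (assuming the standard launcher setup, where app.yml env entries are passed straight into the container):

  cd /var/discourse
  ./launcher enter app
  # inside the container: confirm the value picked up from app.yml
  env | grep UNICORN_SIDEKIQ_MAX_RSS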

So far Sidekiq has stabilized, and memory usage now sits in a much safer zone, with just over 1 GB moved from used memory to cached and available memory.

Leaving this here in case it proves helpful, or as a hint of what to look at, for anyone else with similar issues. It will be interesting to see whether this holds up after a week of uptime; if it does, I will mark this solved.


I had that message recently too, made the same adjustment as you (to 1 GB), and the error hasn't come back :slight_smile:


Confirmed that the changes worked. The last restart of Sidekiq was 11 Oct 12:48 pm.

Memory stats today:

This forum has useful threads (linked above) that helped me. Hopefully this also helps someone else facing similar issues.

What I’ve found is that my forum does not get anywhere near the amount of traffic required for 8 workers. Even 2 would have worked fine.

That said, on my server memory looks like the main bottleneck going forward, but I plan to keep running the VM at the same size. Since swap is on very fast NVMe in RAID 10, I will eventually add zswap, and I'll update this thread in the years to come if/when traffic requires it.
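
For when that time comes, enabling zswap looks roughly like this (a sketch assuming a recent kernel with zswap compiled in; exact parameters vary by distro):

  # enable zswap for the current boot
  echo 1 | sudo tee /sys/module/zswap/parameters/enabled
  # to make it persistent, add a kernel parameter such as
  #   zswap.enabled=1
  # to the boot loader config (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub),
  # then regenerate the grub config and reboot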