Sidekiq is consuming too much memory, restarting

Fortunately, after increasing to 8GB, the problem has not recurred.

Thanks! I’ll try that approach then.

It suddenly started appearing for me a couple of days ago.

Before, I never saw “Sidekiq is consuming too much memory” errors in the logs. Now I see them a few times a day.

And today it happened exactly as @pelcami mentioned: the VM simply became unresponsive. SSH didn’t work. No access over HTTP or HTTPS.

The VM is on Azure, so I pulled up the charts, and they showed network activity suddenly dropping to zero (like a cliff) and CPU stuck at around 40% (probably one core completely maxed out). Restarting the VM solved it.

What I want to say is that it happened very suddenly: one minute it was business as usual with normal network traffic, and the next minute traffic fell off a cliff and dropped to zero. Not gradual at all.

Something is triggering an infinite loop in Sidekiq, or something like that…

You can follow up with @tgxworld, who is working on the issue.

Be sure to do a full rebuild; we recently bumped the Ruby version in the image.

ok. I’ll rebuild and observe.

Just as a reminder to people: If you get a “Docker version too low” error during rebuild, read the following:

I’ve started seeing this after an update a few days ago, @sam. After the Amazon lifecycle patch I did a complete rebuild, and now I’m seeing this every day. It never happened before that.

I’m on v1.9.0.beta14 +200

When was the last time you did a

git pull
./launcher rebuild app

You could be on an older version of Ruby.
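For anyone following along, the full sequence looks something like this, assuming the standard /var/discourse install location:

cd /var/discourse        # standard install directory (adjust if yours differs)
git pull                 # update the discourse_docker scripts and templates
./launcher rebuild app   # rebuild the container against the latest base image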

I did this for the build above (beta14+200).

I didn’t do a git pull; I thought the rebuild did that.

EDIT: Okay, I’m doing it again now, let’s see how it goes.

That’s good to know. Also, I just noticed that vagrant is still on Rails 4.

No issues so far after a git pull + rebuild instead of just a rebuild. Somehow I thought a rebuild would also do a pull.

Where is this dashboard from? Please teach me.

Go to yourforumurl/sidekiq to see what Sidekiq is doing.

Similarly:

/logs to see the error log

/admin/upgrade#/processes to see the running processes

Just thought I’d share that after updating to v2.0.0.beta1 a couple of weeks ago, this problem started again.

Sidekiq is consuming too much memory (using: 513.13M)

I checked the dashboard and it’s currently using about 6M, but somehow it ramps up to 500M about once a day, which is when I see this warning in the logs. Oddly, it’s always around 4:20am to 4:50am in the morning.

Almost certainly a scheduled daily task. cc @sam

Also, if your server has enough total RAM, you can increase the memory allocation for Sidekiq.

Place this in the env: section of app.yml:

UNICORN_SIDEKIQ_MAX_RSS: 1000

The default maximum is, I believe, 500MB.
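As a rough sketch, the relevant part of app.yml would look something like this (the placeholder comment stands in for whatever is already in your env: section):

env:
  ## ...your existing env settings stay as they are...
  UNICORN_SIDEKIQ_MAX_RSS: 1000   # value is in MB; raises the ceiling before Sidekiq gets restarted

You’ll need a ./launcher rebuild app afterwards for the change to take effect.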

Closing the loop here: over the past two years we had one large change that reduced Sidekiq memory. We made a change to the hosted V8 instance to ensure it cleans up used memory and runs a GC. This resulted in a 10-20% memory reduction in our hosting.

Other than that, yes, on mega-busy multisites you need to adjust UNICORN_SIDEKIQ_MAX_RSS.
