How to prevent overlapping jobs? DistributedMutex?

Plugin newbie here. I’m trying to adapt a plugin whose job takes about an hour to run. Looking at /sidekiq, I see the job is scheduled every 30 minutes, so before the first run finishes, another instance of the same job starts, creating duplicate results. How can I prevent this?

One option, of course, is to make the job finish within its 30-minute window, but there are other constraints, and I would rather let it run as long as it needs.
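For reference, the job is a standard Discourse scheduled job, roughly like this (the class name and body are placeholders for my actual code):

module Jobs
  class CustomDigest < ::Jobs::Scheduled
    every 30.minutes

    def execute(args)
      do_stuff # the actual work takes about an hour
    end
  end
end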

I tried this, but it doesn’t seem to prevent a second instance of the job from running:

DistributedMutex.synchronize("custom_digest", validity: 180.minutes)

I think the summary email job has a check like this that was added earlier this year. You might have a look at that.

I think I am doing the same thing as FEATURE: allow post process mutex to be held longer - discourse - Discourse Reviews and FIX: Post and Topic creation race-condition - discourse-code-review - Discourse Reviews, except that I’m using curly braces instead of do/end, which I don’t think matters.

DistributedMutex.synchronize("custom_digest", validity: 180.minutes) {
  do_stuff
}

Yet do_stuff is running multiple times concurrently, and well within the 180 minutes.

If the mutex is inside the execute block, the second job will just block there waiting for it, so you will still see two jobs running in /sidekiq: one actually doing the work and another waiting for the mutex.

Maybe you want to check for the lock and return early if another instance is already running? Hard to guess without knowing more about the exact use case.
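For example, something roughly like this inside execute, using a plain Redis key via Discourse.redis rather than blocking on the mutex. The key name and TTL are placeholders, and I’m assuming the nx:/ex: options are passed straight through to the underlying redis gem:

def execute(args)
  lock_key = "custom_digest_running"

  # SET with NX only succeeds if the key does not already exist, so a
  # second concurrent run gets a falsy result back and bails out early.
  got_lock = Discourse.redis.set(lock_key, "1", nx: true, ex: 3.hours.to_i)
  return unless got_lock

  begin
    do_stuff
  ensure
    # Release the lock so the next scheduled run can proceed; the EX TTL
    # is just a safety net in case the process dies mid-run.
    Discourse.redis.del(lock_key)
  end
end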
