In my day job I’m working in a Django app with Celery for queuing which
uses Redis as the backend. Every deploy, jobs go “poof!” and vanish. And
so we need to slot updates/deploys in between the client’s long-running jobs.
I’m replacing all that with a DB-based queue driven by a formal state
machine, and using Celery just for the “run the task right now” phase.
That gets me persistent state, the option to run jobs directly (i.e. not
using Celery as the queue), and job state that isn’t transient in Redis.
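Roughly the shape that takes, as a minimal sketch (the model and task names like QueuedJob, run_job and dispatch_pending here are illustrative, not the actual code): the job row in the database is the source of truth for state, and Celery only gets handed work that is already recorded as running.

```python
# Minimal sketch: DB-backed queue with an explicit state machine,
# Celery used only for the "run it right now" step. Names are illustrative.
from django.db import models, transaction
from celery import shared_task


class QueuedJob(models.Model):
    class State(models.TextChoices):
        PENDING = "pending"
        RUNNING = "running"
        DONE = "done"
        FAILED = "failed"

    name = models.CharField(max_length=200)
    state = models.CharField(
        max_length=16, choices=State.choices, default=State.PENDING
    )
    updated_at = models.DateTimeField(auto_now=True)


@shared_task
def run_job(job_id):
    # Celery only executes; the durable state lives in the database,
    # so a deploy or a Redis flush can't lose queued work.
    job = QueuedJob.objects.get(pk=job_id)
    try:
        ...  # the actual work goes here
        job.state = QueuedJob.State.DONE
    except Exception:
        job.state = QueuedJob.State.FAILED
        raise
    finally:
        job.save(update_fields=["state", "updated_at"])


def dispatch_pending():
    # Poll the DB for pending jobs and hand each one to Celery immediately,
    # locking rows so concurrent dispatchers don't double-run a job.
    with transaction.atomic():
        pending = QueuedJob.objects.select_for_update(skip_locked=True).filter(
            state=QueuedJob.State.PENDING
        )
        for job in pending:
            job.state = QueuedJob.State.RUNNING
            job.save(update_fields=["state", "updated_at"])
            run_job.delay(job.pk)
```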
Thanks Cameron, I will focus on this one, since we now have logging enabled again. So far nothing obvious: there are no errors or failed background jobs that I can see, and from the code logic I can’t see anything that would purposefully skip these emails. For that topic, no one was sent an email for the OP, which is intriguing; it’s like the job was never enqueued in the first place. Will keep looking and let you know.
@cameron-simpson we’ve looked into this further, and the issue here is actually in our review queue system. For example, Mental block, simple question - Python Help - Discussions on Python.org was detected as “spam” by Akismet, which meant the post required admin approval. When the admin approves the post, the mailing list mode emails are not enqueued. Once we fix this bug, it should clear up the issue. I should be able to get to this in the next couple of weeks.
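To make the shape of the bug concrete, here is an illustrative sketch (Python pseudocode, not the actual application code; every name in it is invented): the normal publish path enqueues the mailing list mode emails, but the approval path publishes the held post without that step.

```python
# Illustrative sketch of the bug only; all names here are made up.
review_queue = []


def publish(post):
    print(f"published: {post}")


def enqueue_mailing_list_emails(post):
    print(f"enqueued mailing list mode emails for: {post}")


def create_post(post, flagged_as_spam):
    if flagged_as_spam:
        # Akismet flagged it, so the post is held for admin review
        # instead of being published straight away.
        review_queue.append(post)
        return
    publish(post)
    enqueue_mailing_list_emails(post)  # normal path: emails go out


def approve_post(post):
    # Admin approves the held post and it gets published...
    publish(post)
    # ...but this path never enqueues the mailing list mode emails,
    # which is why nothing was ever sent for the OP.
    # enqueue_mailing_list_emails(post)  # <- the missing step the fix adds
```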
I merged this fix today @cameron-simpson, and I will deploy python today too. If you could let me know about any further instances of this happening, that would be great. However, I think this should fix the issue: