Redis Memory Keeps Increasing in Discourse 3.4.0.beta3

I recently updated Discourse from version 3.4.0.beta1 to 3.4.0.beta3. Since the update, the forum’s memory usage has been gradually increasing until the application goes down. When checking the server, we can see that redis-server is consuming 95% of the memory.

We are running redis-cli flushall daily to temporarily fix the issue. The Discourse instance is hosted in Docker.
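For anyone comparing numbers, Redis reports its own memory usage via `INFO memory`, which is more reliable than eyeballing `top`. A quick sketch, assuming the standard Discourse container name `app` (the sample values below are made up for illustration):

```shell
#!/bin/sh
# Against a live instance, Redis' own memory stats come from
# (container name "app" is the Discourse Docker default; adjust if yours differs):
#   docker exec app redis-cli INFO memory
#
# These are the lines worth watching, filtered the same way you would
# filter the real output (the numbers are made-up sample values):
sample='used_memory:1247805440
used_memory_human:1.16G
used_memory_peak_human:1.20G
maxmemory_human:0B'
printf '%s\n' "$sample" | grep -E '^(used_memory_human|used_memory_peak_human|maxmemory_human)'
```

`used_memory_human` is what Redis has actually allocated; a `maxmemory_human` of `0B` means no memory limit is configured, so Redis will keep growing until the host runs out.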

I tried downgrading to the previous version, but it threw a few errors while rebuilding.


May I know how to fix this? Is it possible to downgrade to the previous version, or do you have any other suggestions?

I don’t know how to fix Redis, but I’ve read something similar at some point. Searching may help.

But downgrading is mostly a bad idea.


You cannot downgrade Discourse.
Redis is not using 95% of the memory, but 38.9%. That’s still a lot, though.

What does your Sidekiq queue look like? /sidekiq/queues

Please find the details of /sidekiq/queues

Let me know if you need any other details


Are these emails jobs by any chance?


I doubt it. How can I check it?

Click on the queue

May I know if you’re talking about this queues section?

One more thing: under /sidekiq/scheduler/history, I can see that Jobs::Chat::EmailNotifications has been running for a long time

Yes, just click on the word “low”


Please find the details below
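As an aside, the queue sizes shown on /sidekiq/queues can also be read straight from Redis, since Sidekiq stores each queue as a Redis list named `queue:<name>`. A hedged sketch (container name `app` is the Discourse default; the counts below are made-up sample values standing in for real `LLEN` output):

```shell
#!/bin/sh
# Sidekiq keeps each queue as a Redis list named queue:<name>, so LLEN
# gives the enqueued-job count without screenshots. Against a live
# instance you would run, per queue:
#   docker exec app redis-cli LLEN queue:default
#   docker exec app redis-cli LLEN queue:low
#
# Summing the per-queue results is plain shell; the numbers here are
# sample values for illustration:
total=0
for count in 120000000 1500000 500000; do
  total=$((total + count))
done
echo "total enqueued: $total"
```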

There’s an identical issue here:

With maybe a fix:

Since you’re not the only one to experience this, that looks like a bug. :thinking:

Thanks, but I don’t think I can turn off the chat. I’ll try to figure out another way

As a temporary fix, I created a small bash script to clean up Redis memory and set it to run every day at 6 AM using a cron job.
Note: I’m saving the log in /home/ubuntu/logs. You can ignore it if you don’t need it.

#!/bin/bash

# Set log directory and filename
LOG_DIR="/home/ubuntu/logs"
LOG_FILE="$LOG_DIR/redis.cleanup.$(date +%Y-%m-%d).log"

# Ensure the log directory exists
mkdir -p "$LOG_DIR"

# Log information about the current environment (host side)
echo "Running script at $(date)" >> "$LOG_FILE"

# Flush all Redis databases in the app container and save output to the log file (host side)
echo "Running redis-cli flushall" >> "$LOG_FILE"
docker exec app redis-cli flushall >> "$LOG_FILE" 2>&1

# Indicate that the script is done (host side) and exit
echo "Script completed successfully at $(date)" >> "$LOG_FILE"
exit 0
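In case it helps anyone copying this, a matching crontab entry for the 6 AM schedule would look like the line below, installed via `crontab -e` (the script path is hypothetical; point it at wherever you saved the script):

```shell
# Run the Redis cleanup script every day at 06:00 (m h dom mon dow command)
0 6 * * * /home/ubuntu/redis-cleanup.sh
```

Since the `date` formatting lives inside the script file rather than on the crontab line itself, no `\%` escaping is needed in the crontab entry.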

Update: It seems to have fixed itself. Now that I remember, we had a similar issue the last time we updated: the memory kept spiking, but it settled down after a while. It seems like a bug.


Update: I stopped the app and started it again, and I’m facing the same issue again :slightly_smiling_face:


122M enqueued jobs definitely shows that something’s wrong :thinking:

How many users do you have on your Discourse?
How many chat channels are there?
How many users are there in your TOP 3 biggest chat channels?


3 or 4 chat groups have over 2 lakh members

I’m not familiar with “lakh” but google says it’s 100,000 :open_mouth: is that correct?


Yes, the exact number is 227,254 members in a single group, and we have a similar number of members in 2 or 3 other groups
