High memory usage without traffic?

So I’m testing out Discourse as a possible destination for our existing forum and I’m trying to figure out the requirements.

Currently I’m running the Discourse droplet on a DigitalOcean node with 4 vCPUs and 8 GB of RAM.

With the imported vBulletin site running here with no traffic and no activity, the system starts out using about 75% of that 8 GB of RAM and, over a few days, climbs to 100%, then stops responding completely.

This confuses me since the minimum required seems to be a lot less than this.

(I have rebuilt the container and checked and cleared Sidekiq tasks; usage is still high.)

Anyone got any tips, or should I be looking at a monster RAM setup just to keep the forum up?

How many posts were imported?

The system may be rebaking posts and resizing images, which can use a lot of resources even if you have no users. You can look at /sidekiq to see if there are a bunch of jobs queued and/or running. Also, htop may give you some hints about what’s running.
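If htop isn’t handy inside the container, a plain ps one-liner does a similar job (this assumes GNU procps, so adjust the flags if your ps differs):

```shell
# List the five largest processes by resident memory (RSS, in KB).
# --sort=-rss sorts descending, so the biggest consumers come first.
ps -eo rss,comm --sort=-rss | head -n 5
```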


About 240,000 posts.

The import was about 5 weeks ago. I’ve been through 5 app rebuilds since then, as that seems to be the only thing that resolves the memory issue once the container hits 100% memory and becomes unresponsive.

I cleared all tasks in Sidekiq as mentioned, and usage is still at 75%.

The memory graph since I rebuilt the server yesterday:

[Graph screenshots: RAM (8 GB), CPU, traffic, and Sidekiq]

To me, this looks like it is slowly leaking memory to its death over a few days… (which has been the observed behavior so far).


After importing, it’s always a good idea for database performance to create a backup and restore it into the same instance.

Is that memory graph including or excluding cache? (i.e. what does the output of free -m look like?)
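This distinction matters because the "used" column in free -m counts actual allocations, while buff/cache is reclaimable page cache that the kernel gives back under pressure. A quick sketch of reading the fields (the numbers below are illustrative, not from this server):

```shell
# Sample "Mem:" row from `free -m` (illustrative numbers only):
#         total  used  free  shared  buff/cache  available
mem_row="Mem:    7983  5900   300     120        1783       1600"

# Split the row into positional parameters on whitespace.
set -- $mem_row
# $3 = used (real allocations); $6 = buff/cache (reclaimable);
# $7 = available (what new processes can realistically get).
echo "used: ${3} MB, cache: ${6} MB, available: ${7} MB"
# → used: 5900 MB, cache: 1783 MB, available: 1600 MB
```

A graph that plots total minus free will look alarming even on a healthy box, since Linux deliberately fills spare RAM with cache.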

Any plugins?


## Plugins go here
## see https://meta.discourse.org/t/19157 for details
hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git
          - git clone https://github.com/discourse/discourse-data-explorer.git
          - git clone https://github.com/discourse/discourse-solved.git
          - git clone https://github.com/discourse/discourse-cakeday.git
          - git clone https://github.com/discourse/discourse-spoiler-alert.git
          - git clone https://github.com/discourse/discourse-user-card-badges.git
          - git clone https://github.com/discourse/discourse-adplugin.git

Good idea, definitely something I’ll try.

So that rendered the site unresponsive… (backed up, restored the backup, and then rebooted).

RAM usage is up from 6 GB to 7 GB, and the site is not responding.

There is almost 5 GB being used by Redis, so that leaves Discourse with little to work with, especially considering how many unicorns you are running.

If your sidekiq queue is clean, try cleaning Redis since it may have too much garbage from the import:

./launcher enter app
redis-cli flushall

Gotcha, I’ll try the Redis command.

The unicorn worker issue was one that I checked on fairly early. I adjusted the RAM allocated via db_shared_buffers and also set the unicorn workers to 3.

The unicorn workers setting seems to have little to no effect on the number of workers that actually run, though.

From my app.yml file

  ## How many concurrent web requests are supported? Depends on memory and CPU cores.
  ## will be set automatically by bootstrap based on detected CPUs, or you can override
  UNICORN_WORKERS: 3
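As a back-of-the-envelope sanity check, some shell arithmetic shows why the worker count and Redis size matter so much on an 8 GB droplet. The per-component sizes here are assumptions for illustration, not measurements from this box (only the ~5 GB Redis figure comes from earlier in the thread):

```shell
# Rough memory budget on an 8 GB droplet; sizes are illustrative assumptions.
unicorn_workers=3
per_worker_mb=300           # assumed RSS per unicorn worker
redis_mb=5120               # ~5 GB, as reported earlier in the thread
db_shared_buffers_mb=1024   # example db_shared_buffers setting

total_mb=$(( unicorn_workers * per_worker_mb + redis_mb + db_shared_buffers_mb ))
echo "estimated usage: ${total_mb} MB of 8192 MB"
# → estimated usage: 7044 MB of 8192 MB
```

With Redis bloated to 5 GB, the budget is nearly exhausted before the Rails workers and Postgres even warm up, which matches the 75%-and-climbing graphs above.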

That flushall command did wonders… down to 2 GB used. We’ll see if it holds now.

The worrisome part was that usage just kept growing before. Hopefully this allows the app to self-manage better.

Anyways… so the import keeps things permanently in Redis? Seems odd, but I have no clue how Redis works.

Thanks a load for the help.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.