Out of memory errors with custom plugin

Could someone provide instructions on how to configure memory allocation?

Given the sporadic kernel “out of memory” messages and process dumps, a 3x increase in swapping, and increasingly degraded application response times, it seems our app is sometimes running out of memory. Our system currently has 8 GB of RAM and 2 GB of swap. Details below.
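For context, here is roughly how we have been confirming the OOM events and the swap pressure (a minimal sketch assuming a systemd-based Ubuntu host; the grep patterns may need adjusting for your logging setup):

```bash
# check current memory and swap headroom
free -h

# watch swap-in/swap-out activity (the si/so columns) every 5 seconds
vmstat 5

# look for OOM-killer entries in the kernel log
journalctl -k | grep -i -E 'out of memory|oom-killer|killed process'
```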

I have reviewed the instructions for adding more physical memory and swap in ("Cannot allocate memory" when upgrading), but I have not been able to find details about how to allocate it.

When it comes to memory configuration, we are using all defaults. The system recovers after a few minutes, but with usage increasing we think it’s time to get educated about how to improve performance. I am, however, unsure of where and how to configure this. Should we increase the memory allocated to the Docker instance, the number of Ruby unicorn workers, or both?

I’m a sysadmin with no Ruby and limited Docker experience, so pointing me toward the config file and the syntax to use would help greatly.

Discourse 2.6.0 beta2
Ruby 2.3.1-2~ubuntu16.04.14
Ubuntu 16.04

How many unicorns are you running? It is set in your /var/discourse/containers/app.yml file.
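For anyone following along, the setting lives in the env section of that file; a minimal excerpt might look like this (the value 4 is illustrative, not a recommendation):

```yaml
## /var/discourse/containers/app.yml
env:
  ## how many unicorn web workers to run;
  ## more workers handle more concurrent requests, but use more memory
  UNICORN_WORKERS: 4
```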

Hi Rafael - In the “env” section of app.yml, I see we are configured to use 4.


Did you re-run discourse-setup after you upped your RAM? It will tweak the memory settings accordingly. You can also read the comments in app.yml and adjust them.
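Assuming a standard /var/discourse install, that would be something like:

```bash
cd /var/discourse
./discourse-setup        # re-detects RAM/CPUs and proposes memory settings
# or edit containers/app.yml by hand, then rebuild to apply the changes:
./launcher rebuild app
```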


Getting OOM kills with just 4 unicorns is really odd. They should use around 2 GB, leaving 6 GB for PostgreSQL and Redis.

You need to investigate which process is consuming all the memory during an OOM event; this is not normal.
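A simple way to do that, assuming shell access on the host, is to snapshot the biggest memory consumers around the time of an event, e.g.:

```bash
# top resident-memory consumers right now
ps aux --sort=-%mem | head -n 15

# or log them every 60 seconds so you catch the next event
while true; do
  date >> /tmp/mem.log
  ps -eo pid,rss,args --sort=-rss | head -n 10 >> /tmp/mem.log
  sleep 60
done
```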


Hi Rafael and team, my name is Serge and I work with Mr. Happy Lee, who just left for a long-anticipated vacation, so I’ll be working on this issue.

To add to the description: the server was originally built with 8 GB of RAM and 2 GB of swap. We have not upgraded it since then.

In the system log I can see evidence that Ruby is the process consuming all the memory and triggering the kernel OOM killer:

```
Killed process 2960 (ruby) total-vm:10031472kB, anon-rss:7438148kB, file-rss:0kB
```

I am not a Ruby expert, so I am not sure how to tell which Ruby process is consuming that much memory.
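One way to narrow it down (a sketch assuming a standard launcher-based install; PID 2960 is the one from the log above and has likely already been killed, so you would repeat this while memory climbs):

```bash
# container processes are visible from the host, so inspect the PID directly
ps -p 2960 -o pid,ppid,rss,args

# or enter the Discourse container and list the Ruby processes by memory
cd /var/discourse
./launcher enter app
ps aux --sort=-rss | grep -E 'unicorn|sidekiq'
```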

Any suggestions appreciated.

Thanks.

-Serge

Are you running any plugins in this Discourse install?

We do run a custom Scheduled Digest plugin; however, it also runs on our two other installations, and those have no issues.

Can you link that plugin’s source code repository here?


Hi Rafael,

the plugin’s development was paid for by our organization, so unfortunately I am not allowed to share the source code here. Also, since the plugin works on the other instances without issue, this makes me think of increasing the memory resources for this particular server. The machine is a VM, so I can easily double the amount of memory and see if that fixes it.

Okay, good luck!

There is not much we can do from our side to debug code we can’t see. You may want to set up the Prometheus exporter plugin for Discourse to follow metrics on your instance.
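If it helps, the usual way to add a plugin is to clone it in the after_code hook of app.yml and then rebuild (a sketch based on the standard plugin-install pattern; double-check the repository URL against the plugin’s topic):

```yaml
## /var/discourse/containers/app.yml
hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/discourse-prometheus.git
```

followed by `./launcher rebuild app`.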


Are the other instances also running Ruby 2.3.1-2~ubuntu16.04.14?

Maybe it’s nothing relevant, but:

> …so this was clearly a Ruby bug. We tested across multiple versions of Ruby and determined that only rubies 2.3.x and 2.4.x were exhibiting the leak (apparently this was fixed in Ruby 2.5.0).

And the Discourse README asks for Ruby 2.6+ :roll_eyes:


Thanks! I’ll post an update when it settles down.

Hi Benjamin, the other instances are also running Ruby 2.3.1-2~ubuntu16.04.14. I’ll test an update to see whether it breaks our Docker setup.

The Ruby version on the host is not relevant here.

As long as you are using our official Docker image, you will be on the correct supported version for Discourse.
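For example, you can verify the version the app actually uses inside the container, independent of the host’s packaged Ruby (assuming a standard launcher install):

```bash
cd /var/discourse
./launcher enter app   # open a shell inside the running container
ruby -v                # the Ruby that Discourse actually runs on
exit
ruby -v                # the host's Ruby (2.3.1 here), not used by Discourse
```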


This topic was automatically closed after 26 hours. New replies are no longer allowed.