Site goes down at the same time every day in memory-constrained environment


I’m not sure what this could be, and I suspect running Discourse on 1GB of memory plus 1GB of swap is below minimum spec at this point, so I’d understand if this is outright unsupported. But I’m having a weird issue where my site goes down at the same time every day (around 2:48 PST) for five minutes or so. Trying to hit it in the meantime sometimes throws 429s.

SSHing into the server and eyeballing top when this happens shows that about half of swap has been released (suggesting that maybe something else is running on the server and elbowing out Discourse, but I don’t know what it could be; there’s nothing in cron for that time of day). Most of the CPU is consumed by a single postmaster process, which runs for a little over two minutes, gets killed, and then everything gradually goes back to normal.

Any idea what’s up here? Been happening for the past several releases at least.


I don’t think it could be a crawler given that I don’t know any crawlers that are that punctual…

Per previous discussions, it is almost certainly the database backup blowing out all your memory. Increase swap to 2GB.
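If you went with a swapfile when you set the server up (rather than a swap partition), bumping it to 2GB looks roughly like this. This is a hedged sketch: the /swapfile path is an assumption, so use whatever path your existing swapfile actually has, and run it as root.

```shell
# Disable the old 1GB swapfile first (path is an assumption).
swapoff /swapfile

# Recreate it at 2GB and re-enable it.
fallocate -l 2G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# If /swapfile isn't already in /etc/fstab, add it so it survives reboots:
# /swapfile none swap sw 0 0
```

`free -h` afterwards should show the new 2GB of swap.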


Will do, thanks! Probably want to update your install docs to reflect that if you haven’t already.

This is a relatively new problem, cc @sam


For the record, I just checked and I have automatic backups disabled, unless it’s something else (e.g. redis) that runs every day and isn’t exposed in the Discourse admin config. But I’ve increased swap anyway, and we’ll see how tomorrow goes!

Hi Alex,

429 is a very odd response code for the site to be sending due to memory problems. That’s normally because someone on the same IP address is doing something untoward. Is there any chance someone else on the same connection (a person or machine in your office, for example) is running some sort of scraping batch job at the same time?


Those might be a red herring. I’m running the container behind an nginx on the host so I can keep some old LAMP apps up on a couple of other subdomains, and I think it just gets hit a lot by people trying to access the site when it goes down like this, with the 429s being the result.

And I know I probably shouldn’t be doing that at or below minimum spec, so you’re free to disavow, but those haven’t changed in a dog’s age and this has only started happening with the most recent Discourse versions, always at the same time of day.

Well, one thing I would check is that the NGINX inside the container (which has the rate-limiting template enabled) does not think every single user has the same IP address.


I don’t think the nginx inside the container has diverged from stock at all, I use the provided launcher script for all the docker stuff.

This is what the host nginx config for the site looks like if you’re curious, but I don’t think this is the problem:

root@selectbutton:/etc/nginx/sites-enabled# cat discourse

server {
    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/;
    ssl_certificate_key /etc/letsencrypt/live/;

    location /basic_status {
        stub_status on;
        access_log off;
        deny all;
    }

    location / {
        root /var/www/discourse-root;
        try_files $uri @discourse;
    }

    location @discourse {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://localhost:4443;
    }
}

server {
    listen 80;
    return 301 https://$host$request_uri;
}
I’m on a server with 2GB of RAM and I’m seeing the same behavior as of a few days ago. It seems to happen around backup time.

Well you would be using this:

And if I had to bet a :money_with_wings:, you do not have this template setting:

real_ip_header X-Forwarded-For;

The internal NGINX is going to have to trust that header.
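Concretely, trusting that header would look something like this in the container’s nginx config. This is a sketch: `set_real_ip_from` must be whatever address the host-side nginx connects from, and 172.17.0.1 here is just the usual Docker bridge gateway, an assumption you should verify.

```nginx
# Only accept X-Forwarded-For from the host-side reverse proxy,
# so the rate limiter sees real client IPs instead of one shared address.
set_real_ip_from 172.17.0.1;       # host's docker bridge address (assumption)
real_ip_header X-Forwarded-For;
```

Without `set_real_ip_from`, the `real_ip_header` directive is never applied, and every request appears to come from the proxy’s address.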


Ah, OK, good to know. So should I add that to the host nginx or the container nginx (which is to say, a to: block in the app.yml)?

You need to run a replace command in the internal NGINX.

After making the changes, look at the actual file on disk to confirm that you got what you wanted.

A trick I use is:

  1. ./launcher enter app
  2. cd /etc/nginx/conf.d/
  3. edit the discourse.conf file
  4. sv restart nginx
  5. see that my desired effect was achieved
  6. turn that change into a yml replace command
  7. rebuild
  8. confirm file is good
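The end result in app.yml might look roughly like this. Hedged sketch: the filename matches the standard Docker install, but the from: anchor and the 172.17.0.1 bridge address are assumptions you should confirm against the actual file inside your container.

```yaml
# In app.yml, under the app's run: section (sketch; adjust to your file).
run:
  - replace:
      filename: "/etc/nginx/conf.d/discourse.conf"
      from: /server_name.+$/
      to: |
        server_name _ ;
        set_real_ip_from 172.17.0.1;
        real_ip_header X-Forwarded-For;
```

Then `./launcher rebuild app` and check the file on disk, per the steps above.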

OK, will test if it goes down again tomorrow despite the increased swap. Any other ideas if this turns out not to be it? The precise timing still makes me quite skeptical.

Yeah, the strict periodicity (in the absence of scheduled backups) is a very confusing aspect of this. It seems like a heck of a big clue; if you can figure out what’s running at that exact time, I think you’ll be 90% done.


Same problem for me :frowning:
I’m tired of rebuilding after every crash.

Check your /sidekiq and see if anything is running at that time.

It might also help to start psql and take a look at what is going on in your database at that time:

SELECT pid, age(query_start, clock_timestamp()), usename, query FROM pg_stat_activity;
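If you’re on the standard Docker install, one way to get a psql prompt for that query is from inside the container. Hedged sketch: this assumes the default container name `app`, the default checkout location, and the default `discourse` database owned by the `postgres` user.

```shell
cd /var/discourse            # or wherever your discourse_docker checkout lives
./launcher enter app         # shell inside the running container
su postgres -c 'psql discourse'
# then paste the SELECT ... FROM pg_stat_activity; query at the prompt
```

Running it while the site is down should show which query the busy postmaster process is chewing on.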


Sidekiq is idle :thinking:

And of course it seems not to have happened today, after occurring several days in a row at this exact time. Huh. Swap usage hasn’t ticked over 900M either. Oh well! Stay tuned, I guess.