I originally made this thread about the issue I’m having, but it wound up getting sidetracked into IP header forwarding from the host nginx to the container (which was also a problem in my environment that I was happy to get resolved, though it seems to have only been exacerbating the issue, not causing it):
Basically, I’m running Discourse on the minimum spec (1 GB of memory plus 2 GB of swap), and about once a day the site would go down, or very nearly go down, with heavy thrashing. If I watched top while this was happening, I noticed that a chunk of swap had been released (around 200-300 MB), and inevitably a single postmaster process would be running for about two minutes, after which it would be killed and things would gradually return to normal.
I’ve just noticed that this happens when the container’s nginx logs rotate (i.e., when access.log.1 is created), which seems like a big clue. Is your internal message queue (n.b. I don’t really know how Discourse works internally) somehow not handling this gracefully?