High availability/microservice architecture

I admin a very active community that averages 180K users daily, and I cannot really scale vertically anymore; we need to scale horizontally. Before it is mentioned: I don’t particularly care about what is “officially supported”, only about what is technologically possible.

Right now we have an Ubuntu instance running Discourse in Docker per the official installation instructions; uploads are on S3 with a CDN serving them.

I plan to migrate to a NixOS flake I’m writing. From doing very in-depth resource usage investigations, I’ve found that CPU is mostly consumed by Postgres and the Unicorn workers (we have tuned this to be as light on CPU as we can while keeping performance; there are 16 workers).

Memory, likewise, is mostly consumed by Postgres and Redis.

If possible, I want to isolate these into three servers (see the sketch below):
Discourse frontend (with Unicorn)
Redis server
Postgres server
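
For context, this is roughly the env block I imagine for the frontend container once the data services live on their own hosts. It’s only a sketch based on the web_only.yml sample in the official discourse_docker repo; the hostnames and credentials are placeholders, not my real values:

```yaml
env:
  DISCOURSE_HOSTNAME: forum.example.com   # placeholder domain
  UNICORN_WORKERS: 16                     # matches our current tuned worker count

  # External Postgres server (placeholder host/credentials)
  DISCOURSE_DB_HOST: pg.internal.example.com
  DISCOURSE_DB_NAME: discourse
  DISCOURSE_DB_USERNAME: discourse
  DISCOURSE_DB_PASSWORD: changeme

  # External Redis server (placeholder host)
  DISCOURSE_REDIS_HOST: redis.internal.example.com
  DISCOURSE_REDIS_PORT: 6379
```

Presumably latency between the frontend and Redis matters more than raw bandwidth, since Redis is hit on nearly every request, which is part of why I’m unsure about splitting it out.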

I have already successfully run a Discourse server with Postgres on a separate server, but I’m unsure whether I can move Redis elsewhere as well, and whether that would make sense performance-wise.

Does anyone else currently run Discourse like this?

You can run Redis on another server if you think it’ll help. It’s fairly common to use ElastiCache on AWS, for example, so there’s no problem putting it wherever you think there’s the performance and bandwidth to handle it.
