How to scale Discourse?

Folks, I’ve seen the question a few times, but really haven’t seen any good answers…

How does one scale and make Discourse highly available?

Like, how can I set up a PostgreSQL cluster, have a bunch of web “front-ends” talking to the cluster, and store shared data on NFS for those front-ends? Say, haproxy in front as the load balancer, Docker containers for the web side, and a PostgreSQL cluster on the back end?
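
Something like this for the haproxy piece, just as a sketch (the backend names and addresses here are made up, nothing Discourse-specific):

```
# haproxy.cfg sketch -- hostnames, IPs and ports are placeholders
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend discourse_in
    bind *:80
    default_backend discourse_web

backend discourse_web
    balance leastconn
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```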

Also, does Discourse have the ability to use separate read nodes and write nodes on the database side, like WordPress, XenForo, and some others do?

Thanks!
Tom

This is covered in the advanced install instructions. Basically a lot more Docker containers, broken down per service (web / PostgreSQL / Redis), all sitting behind a load balancer.
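
Roughly, the web containers end up pointing at an external data container, along the lines of the web_only.yml sample in discourse_docker. A trimmed sketch; the hostname, addresses, and password below are placeholders, and the data container is the one running PostgreSQL and Redis:

```yaml
# containers/web_only.yml (sketch) -- web tier pointed at an external data container
templates:
  - "templates/web.template.yml"

expose:
  - "80:80"

env:
  DISCOURSE_HOSTNAME: 'discourse.example.com'
  DISCOURSE_DB_HOST: 10.0.0.20          # data container running PostgreSQL
  DISCOURSE_DB_PASSWORD: 'change-me'
  DISCOURSE_REDIS_HOST: 10.0.0.20       # Redis host (same box here, but can be separate)
  DISCOURSE_SMTP_ADDRESS: smtp.example.com
```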

Thanks, Jeff. I’ll give it another read, but it wasn’t clear to me the first time through how to separate all the pieces in order to scale it.

Tom

Hi Tom,

You have Rails on the front end, plus Redis, Sidekiq, and PostgreSQL behind it.

  1. nginx/Rails can be hosted on multiple servers behind the load balancer.

  2. Redis would either have to be scaled up on a single server, or split across multiple instances with replication. That would require some architectural changes to Discourse, but you could also partition the Redis ‘schema’ so the load is spread across different Redis instances based on the task, etc.

  3. Sidekiq scales by running multiple instances and round-robining between them, or by partitioning the usage as in #2 (reference: Google Groups); see the Sidekiq sketch after this list.

  4. Set up a cluster of PostgreSQL servers like you said; a streaming-replication sketch also follows this list.
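
For #3, the simplest form is just running extra Sidekiq processes pointed at the same Redis and database as the web tier; a minimal sketch with illustrative paths and concurrency (in the Docker image, the UNICORN_SIDEKIQS env setting is what controls how many Sidekiq processes get started):

```sh
# Extra Sidekiq workers only need the same Redis/DB config as the web tier.
# -e (environment) and -c (concurrency) are standard Sidekiq flags.
cd /var/www/discourse
RAILS_ENV=production bundle exec sidekiq -e production -c 5
```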
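
For #4, PostgreSQL streaming replication is the usual starting point; a sketch with made-up addresses and a hypothetical replicator user (on 9.x the wal_level value is hot_standby rather than replica):

```
# primary: postgresql.conf
wal_level = replica            # 'hot_standby' on PostgreSQL 9.x
max_wal_senders = 5

# primary: pg_hba.conf -- let the standby connect as a replication user
host  replication  replicator  10.0.0.0/24  md5

# standby: clone the primary and write standby settings (-R)
pg_basebackup -h 10.0.0.20 -U replicator -D /var/lib/postgresql/data -R
```

That gets you a standby to fail over to; whether Discourse can actually route reads to replicas is the separate question Tom asked above.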

Jeff, where is that advanced install page? Is it this? https://meta.discourse.org/t/advanced-setup-and-administration/15929

Here you go: https://github.com/discourse/discourse_docker/blob/master/README.md
