Multi-site Installation

Hi Guys,

We have users scattered all across Asia. To improve performance, we are considering “duplicating” our front-ends in different regions:

  • One server in Tokyo to serve users from South Korea and Japan
  • One server in Mumbai to serve users from the Middle East and India

We will use AWS Route 53 latency-based routing to find the server with the lowest latency for a given user.

This means that each Discourse web front-end must be associated with a different subdomain:
a1-community.mydomain.com
a2-community.mydomain.com
a3-community.mydomain.com

Route 53 will dynamically “redirect” users to the server with the lowest latency.
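
One way to wire this up, sketched as CloudFormation (the zone, record names, and regions below are illustrative, not settled choices):

# Sketch of Route 53 latency-based routing.
# Both records share the same public name; Route 53 answers with the
# record whose AWS region has the lowest latency for the client.
Resources:
  CommunityTokyo:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: mydomain.com.
      Name: community.mydomain.com.
      Type: CNAME
      TTL: "60"
      SetIdentifier: tokyo          # must be unique per latency record
      Region: ap-northeast-1        # latency is measured to this region
      ResourceRecords:
        - a1-community.mydomain.com
  CommunityMumbai:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: mydomain.com.
      Name: community.mydomain.com.
      Type: CNAME
      TTL: "60"
      SetIdentifier: mumbai
      Region: ap-south-1
      ResourceRecords:
        - a2-community.mydomain.com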

To achieve this we would need to split Discourse into several instances: 4 or 5 front-ends + 1 back-end (database).

Is this something we can achieve with your multi-site approach?

Thx
Seb

Multisite is for when you want to have several distinct forums on one host.

Not sure how that helps with latency if you only have 1 “back-end”.

Have you considered just using a CDN for the static assets?
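
It’s usually a one-line change in the container definition; a minimal sketch, assuming you already have a CDN distribution pointing at the forum (the hostname is a placeholder):

# containers/app.yml, under the env: section
DISCOURSE_CDN_URL: https://cdn.mydomain.com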

3 Likes

Not sure how that helps with latency if you only have 1 “back-end”.

We control the network connection between our datacentres. It is supposed to be very fast (10 Gb/s), so in the end it should not be a bottleneck.

We tried Cloudflare and it did not help that much. Don’t you think that moving servers closer to our users would make navigation smoother? If this is a supported/recommended approach, is there any guidance?

1 Like

In that case you’d want to launch the app containers on your front-end servers and the db + redis containers on your back-end servers.

It’s indeed a supported approach and is documented in

and

https://github.com/discourse/discourse_docker#single-container-vs-multiple-container
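
In rough outline it looks like this (a sketch based on the samples in that repo; the IP is a placeholder for your back-end server):

# containers/data.yml — runs on the back-end server
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
expose:
  - "5432:5432"   # PostgreSQL, reachable from the front-ends
  - "6379:6379"   # Redis, reachable from the front-ends

# containers/web_only.yml — runs on each front-end server
templates:
  - "templates/web.template.yml"
env:
  DISCOURSE_DB_HOST: 10.0.0.10      # back-end server
  DISCOURSE_REDIS_HOST: 10.0.0.10   # same back-end runs Redis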

3 Likes

That is pretty cool!

Why do we need to run Redis on our back-end server? Shouldn’t we host the cache engine next to the front ends?

Redis is not only a cache engine. It’s used for scheduled background jobs and rate limits, for example.

4 Likes

How do you use Redis in Discourse? Do you cache all topics and then only query the cache, or would it be more efficient to deploy a Postgres read replica?

Caching of topic data mostly uses ActiveRecord’s in-process caching, but the “anonymous cache” mechanism caches the built HTML page (with its JSON data) in Redis.

Redis also caches /site.json, /admin dashboard data, and the random topic selection (the Suggested Topics at the bottom).

But most of the time, Redis is used as a transient, short-term data store that is expected to be reliable. Discourse can cope with a FLUSHDB, but:

  • you’ll lose some scheduled jobs
    • you’ll drop some e-mails
    • periodic data refreshes will be delayed
  • rate-limits will be reset
  • view-counting will have some performance drop
  • pending social logins will fail
  • some forms of e-mail tokens (backup download, confirm admin grant) will be dropped

But probably the most critical use of Redis is DistributedMutex, which is used to take a lock when important tasks are performed (such as finalizing a new post, uploading files, or sending an email notification).


It would be an extremely bad idea to have a separate Redis instance for each application server.

The read-only Postgres replica is a good idea, but I don’t remember if anyone’s successfully done it before. People have certainly tried; I think there were some issues with ActiveRecord picking which connection to use.

3 Likes

Now I’m a little bit confused. I was under the assumption that we could play with the following settings:

# host address for db replica server
DISCOURSE_DB_REPLICA_HOST =

# port running replica db server, defaults to 5432 if not set
DISCOURSE_DB_REPLICA_PORT =
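
i.e. something like this in each front-end’s config, with DISCOURSE_DB_HOST still pointing at the primary for writes (the hostnames are placeholders):

env:
  DISCOURSE_DB_HOST: db-primary.mydomain.com                # all writes go here
  DISCOURSE_DB_REPLICA_HOST: db-replica-tokyo.mydomain.com  # nearest read replica
  DISCOURSE_DB_REPLICA_PORT: 5432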

Each front-end would target the closest read replica. However, if it’s not a recommended approach, we won’t go for it :slight_smile:

1 Like

Ah, so it was figured out; I couldn’t remember. Yes, those are the right settings to use.