Purpose of the Discourse shared volume in a high availability setup

Hi! I’m wondering what the purpose of the shared volume is in a Discourse deployment?

For context, we have Discourse up and running in a Kubernetes cluster (in GKE), but we’d like to scale out the number of instances of our deployment to make it more highly available. All instances would obviously continue to talk to the same Postgres database and Redis instance, but I’m wondering if all the webservers need to be talking to the same shared volume, or whether the webservers can be scaled independently (i.e. can each webserver instance just have its own “shared” volume).

Or is there a hard requirement that all webservers utilize the same shared volume? In that case we’d have to look at mounting something like an NFS volume into each of our containers.
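In case it helps anyone reading later, here’s a rough sketch of what that NFS option would look like in Kubernetes: a `ReadWriteMany` PersistentVolume that every webserver pod can mount at once. All names, the server address, and the export path are placeholders, not anything from an actual deployment.

```yaml
# Hypothetical PersistentVolume backed by an NFS server; all values are examples.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: discourse-shared
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany          # lets every webserver pod mount the same volume
  nfs:
    server: nfs.example.internal
    path: /exports/discourse
---
# Matching claim that each webserver pod would reference in its volume spec.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: discourse-shared
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```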


The shared volume is there as a value add; you can get away without it. In a typical setup (uploads are on AWS, PG and Redis somewhere central) you will only use it for Rails/Unicorn/NGINX etc. logs. You would then ship those somewhere central with some log aggregation service.
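For the “uploads are on AWS” part, the relevant Discourse site settings can be set via container environment variables, roughly like this (bucket name, region, and keys below are placeholders; double-check the current setting names against the official S3 uploads guide):

```yaml
# Excerpt from a Discourse app.yml env section; all values are placeholders.
env:
  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: us-east-1
  DISCOURSE_S3_BUCKET: my-discourse-uploads
  DISCOURSE_S3_ACCESS_KEY_ID: "<access key>"
  DISCOURSE_S3_SECRET_ACCESS_KEY: "<secret key>"
```

With uploads in S3, every webserver serves the same files regardless of which pod originally received the upload, so the shared volume only ever holds logs.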


Perfect, thanks @sam!

Just wanted to check that there weren’t going to be issues with uploads going to one host, and then a request hits another host and isn’t available due to it running in a separate container with a separate mount.

Sounds like we’ll be ok here :+1:.

Note: it will be an issue unless you use our S3 uploads provider.