Purpose of the shared volume for Discourse in a high-availability environment

Hi! I'd like to know what role the shared volume plays in a Discourse deployment.

Some background: we already have Discourse successfully deployed and running in a Kubernetes cluster (GKE), but we'd like to scale up the number of instances for high availability. Obviously all instances will keep connecting to the same Postgres database and Redis instance, but I'm wondering whether all web servers must access the same shared volume, or whether the web servers can scale independently (i.e., can each web server instance have its own separate "shared" volume?).

Or is there a hard requirement that all web servers use the same shared volume? If so, we'd need to look into mounting something like an NFS volume into each container.

Thanks!

The shared volume is there as a value add; you can get away without it. In a typical setup where uploads are on AWS and PG/Redis are somewhere central, you will only use it for Rails/Unicorn/NGINX etc. logs. You would then ship those somewhere central with a log aggregation service.
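In that setup, each replica can get its own private scratch volume instead of a shared one. A minimal sketch of what that might look like in Kubernetes, assuming uploads go to S3 and logs are shipped off-pod (the names, image, and mount path here are illustrative, not from this thread):

```yaml
# Hypothetical multi-replica Discourse web Deployment.
# Each pod gets its own emptyDir for /shared, which is only
# used for logs and temp files -- nothing that must be shared.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: discourse-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: discourse-web
  template:
    metadata:
      labels:
        app: discourse-web
    spec:
      containers:
        - name: discourse
          image: discourse/discourse:latest   # illustrative image name
          volumeMounts:
            - name: shared
              mountPath: /shared              # per-pod, NOT shared across replicas
      volumes:
        - name: shared
          emptyDir: {}                        # scratch space only; data is lost on pod restart
```

Since an `emptyDir` is ephemeral, this only works if nothing durable (uploads, backups) lands on it.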


Perfect, thanks @sam!

Just wanted to check that there wouldn't be issues with an upload going to one host and then a request hitting another host where the file isn't available, since each container runs with its own separate mount.

Sounds like we’ll be ok here :+1:.

Note: it will be an issue unless you use our S3 uploads provider.
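Enabling S3 uploads is done via Discourse's global settings, typically set as environment variables. A sketch of the relevant `env` section (bucket, region, and credential values are placeholders you'd replace with your own):

```yaml
# Illustrative env settings to route Discourse uploads to S3,
# so uploaded files never depend on any one pod's local volume.
env:
  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: us-east-1                  # placeholder region
  DISCOURSE_S3_BUCKET: my-discourse-uploads       # placeholder bucket name
  DISCOURSE_S3_ACCESS_KEY_ID: <access-key-id>
  DISCOURSE_S3_SECRET_ACCESS_KEY: <secret-key>
```

With this in place, every web replica reads and writes uploads against the same bucket, so no shared filesystem is needed for them.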
