Discourse in Kubernetes (K8s)

Hello. My company is trying to set up Discourse in our infrastructure for user content. I built an image with the Ruby code and a Helm chart, and deployed the application to k8s without any problem. But our content manager ran into a problem with the admin page: it kept showing an old value for a field that had been changed. I checked the database and everything looked fine; the new value was in the site_settings table. So I investigated how the server was started and found that the Puma app server was running with 4 workers. I reduced the workers to 1 and the problem went away (see the sketch below).
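
A rough sketch of the workaround, assuming a fairly standard `config/puma.rb` (the exact file and settings in our image may differ):

```ruby
# config/puma.rb -- hypothetical excerpt, not the exact config from our image.
# With workers > 1, each forked Puma worker holds its own in-process copy of
# cached settings, so a change applied in one worker can appear stale when a
# different worker serves the admin page.
workers Integer(ENV.fetch("PUMA_WORKERS", "1"))          # was 4; reduced to 1 as a workaround
threads 1, Integer(ENV.fetch("PUMA_MAX_THREADS", "5"))

preload_app!

port        ENV.fetch("PORT", "3000")
environment ENV.fetch("RAILS_ENV", "production")
```

This obviously only hides the issue by removing the extra processes, which is why I am asking about the proper multi-worker / multi-pod approach.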
So my question is: how should I fix this properly for a clustered setup? Does a shared settings cache exist for Discourse?
