Hi,
I’m already running the following and need to understand how to configure a Discourse container for a production environment:
- Postgres server
- Nginx (reverse proxy for all sites)
- Redis
- Docker cluster
My intention is to run a Discourse container and scale as needed. However, I need to protect data as done in production.
I’ve read both pages in detail, as well as a ton of other information. Just a couple of points I’ll make:
In a production HA environment, it is not acceptable to have a single container run Rails, Redis, PostgreSQL, and Nginx.
It’s not clear to me whether the current Discourse Docker solution will run properly on an existing Docker cluster with many other containers already running.
Our environment already has HA PostgreSQL, Redis, and a Docker cluster; these should be leveraged.
The Discourse Rails app should be able to start with a ‘docker stack deploy’ command, not a ./launcher script.
I’ve run many Rails containers before for enterprise applications. What I’m looking for is to understand whether I’m going to need to build my own Discourse container (I hope not). If not, where can I get the Discourse container (preferably via a ‘docker pull’ command) and start it with ‘docker stack deploy’ and a matching docker-compose.yml file?
I also need to know exactly which folders and files from inside the container I need to mount on the host (all our persistent data runs from a very fast NFS mount).
So, to recap, I intend to have a docker-compose.yml file that contains all the information to start the Discourse Rails app, including mounts, and the ability to scale replicas.
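To make that recap concrete, here is a rough sketch of what such a docker-compose.yml might look like. This is not an official Discourse artifact: the image name, registry host, hostnames, mount paths, and replica count are all assumptions, and it presumes you have already built and pushed an image (e.g. one produced by ‘./launcher bootstrap’) to your own registry. The environment variable names follow the ones Discourse’s own container templates use to point at external Postgres and Redis.

```yaml
# Hypothetical sketch only -- adjust every name and path to your environment.
version: "3.7"

services:
  discourse:
    # Assumed: an image you built yourself and pushed to a private registry.
    image: registry.example.com/discourse/app:stable
    environment:
      DISCOURSE_HOSTNAME: forum.example.com     # public hostname
      DISCOURSE_DB_HOST: postgres.internal      # existing HA PostgreSQL
      DISCOURSE_DB_USERNAME: discourse
      DISCOURSE_DB_PASSWORD: changeme
      DISCOURSE_REDIS_HOST: redis.internal      # existing HA Redis
    volumes:
      # launcher normally maps a host "shared" directory into /shared;
      # here it lives on the NFS mount so replicas see the same data.
      - /mnt/nfs/discourse/shared:/shared
      - /mnt/nfs/discourse/log:/var/log
    deploy:
      replicas: 2                               # scale as needed
```

With a file like this in place, ‘docker stack deploy -c docker-compose.yml discourse’ would start the app on the swarm; the open question this thread is really about is how to produce a suitable image in the first place.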
Sam, I appreciate the info. I’ll read through all of it and let you know shortly.
Note: It seems to me that the work required here to make this type of solution available is to carefully read through ‘launcher’ and produce deployment instructions that include the creation process and a docker-compose.yml. I plan on doing this because I need it for myself. I’ll host my instructions on my own GitHub repo (which doesn’t exist yet) and link it back here.
I am sitting with the same question here in 2020. Docker Swarm, Kubernetes, and Rancher seem to be very popular orchestration frameworks and the preferred method for running containers in production environments. This project makes that very difficult to set up, since you have a tool that generates and launches the containers. Why adopt container technology for shipping your project so early on, only to frustrate users later by not keeping up with said technology?
Is there any approach we can take to make this run inside our cluster? Can we save the images built by the script to a private Docker registry and, in turn, reference them in the docker-compose.yml files we use for stack deployments?
They bought bad crystal balls to predict the future. Told them to buy better ones for the next project.
Being serious, like you asked it’s easy to save the resulting image in a container registry and deploy it afterwards using the tool you like the most.
And since you can run said container using bash scripts, Puppet, Chef, Ansible, Terraform, baked into an AMI, a user-data script, Docker Swarm, Docker Compose, Kubernetes, Capistrano, AWS ECS, or many other ways, it would be very out of scope for us to dictate your infrastructure management.
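The save-and-push workflow suggested here might look roughly like the following. This is an untested sketch: the container name ‘app’, the registry host, and the tag are placeholders, and it assumes launcher names its built image `local_discourse/<container-name>` as it does in a standard install.

```shell
# Build the image once on a bootstrap host using the standard tool.
./launcher bootstrap app            # produces the local_discourse/app image

# Tag and push it to a private registry (hostname/tag are assumptions).
docker tag local_discourse/app registry.example.com/discourse/app:stable
docker push registry.example.com/discourse/app:stable

# On the cluster, a docker-compose.yml referencing that image can then
# be deployed with your orchestrator of choice, e.g.:
docker stack deploy -c docker-compose.yml discourse
```

The trade-off is that you take over responsibility for rebuilding and re-pushing the image on every Discourse upgrade, which ./launcher would otherwise handle for you.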
@Falco, Thank you very much for the input it is appreciated.
I have tried convincing our client that they should buy a business licence subscription for Discourse but they are adamant that they want Discourse hosted on their own on-premises infrastructure.
It would have been great if Discourse just sold support contracts since the client is willing to pay for services. They are already paying for Elastic on-prem orchestration for example.
This is a very risky, dangerous business to get into – because you get blamed for all the customer’s problems, and you typically have no (or extremely limited) access to their actual infrastructure nor do you have authority to change anything, even if you did!