So which container was running? Assuming it was app, then no, you’re running an old version of discourse-setup.
Do a git pull before proceeding to make sure you’re using the latest version of discourse-setup.
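For a standard install, that is:

```bash
cd /var/discourse   # the standard discourse_docker install path
git pull
```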
If you have the data or web-only container running, then you should check what caused the other one to fail to start. Usually, the web-only container fails to start because a process (a web server) is already listening on port 80/443.
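One quick way to check what’s already holding those ports (assuming a Linux host with ss available):

```bash
sudo ss -tlnp | grep -E ':(80|443)\s'
```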
Maybe so. It is a big help, and like discourse-setup, it’s for a very specific purpose: a new installation that’s very standard. My install scripts have used it for quite a while. It can be an easy way to switch to two containers if you’re willing to do a backup on the old container and restore it on the new one.
My concern has always been that it would be difficult to support, as folks who don’t understand it will try to use it and then not be able to use any documentation, since “rebuild app” won’t work anymore, and knowing when you need to rebuild the database container is also difficult. I had a rebuild fail recently because Redis was 3.0 and 4.0 is now required. Then Postgres also had to be updated, which required a sequence of steps to be followed; you had to know when to rebuild the data container, when to rebuild the web container, and how to change the path from what’s recommended. It all went off without a hitch, for me, but trying to communicate that to someone who doesn’t know what bash is in a forum would be frustrating for all concerned.
I think it might be best to keep the barrier to creating a non-standard installation rather high, to protect people from themselves.
Hi Jay @pfaffman, thanks for this post and others on this “two container” topic, including Sam’s writings on the subject.
We have been trying to set up two containers as you describe, one container for data and one for web-only, and have been running into a number of snags getting this running on macOS.
But before we worry about debugging this “two container config” on the Mac or on Ubuntu, we would like to make sure we are doing this for the right reason.
The reason we want to do the “two container dance” is so the site will not go down when we rebuild the web app, for example when installing a plugin. Also, when we tweak a homegrown plugin, we have noticed that sometimes the only way to ensure our changes take effect is to rebuild (that is a story for another day). I have also been struggling to get a “fast and friendly” web dev setup going to my satisfaction, but that is another topic for another day.
So, my question is: does the “two container” setup significantly minimize downtime when the web-only part of the app is rebuilt?
That’s the right way to think about this, isn’t it?
When we install a plugin or tweak one, we need to rebuild only the “web-only” yml file and not the data yml, correct?
We come from a LAMP forum background, where changes to plugins can be, and mostly are, made at runtime on the live site (with no downtime unless we fat-finger something). We also hail from some VueJS web apps where we build on the desktop, then just upload and move the new app into place, so there is virtually no downtime when upgrading or updating the VueJS part of the site. With Discourse, however, we get downtime, which we do not want (even a few seconds).
Does the “two container” solution significantly reduce downtime when we either (1) rebuild the app (for plugins, code tweaks, etc.) or (2) restore from a full backup?
I feel like I’m going to get “beat up” (again) for asking this question, because we are looking for a way to run Discourse in production and make changes with near-zero downtime, and we have not yet found a way to do things that are so easy with a LAMP or VueJS app (for example).
Hence the struggle / interest in the “two container” method, which we have yet to get up and running.
Yes. The existing web container continues to run while the new container is being built. The downtime, then, is just the time it takes to spin up the new web server, which is typically under a minute, though by no means a zero-downtime proposition. If you want zero downtime, you need a reverse proxy in front that allows the new container to spin up and start working before you shut down the old one. (And if the database migrations for the new container break things for the old one, you get downtime there too, unless you go through some other machinations.)
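For reference, the usual way to keep that window small with a two-container setup is to build first, then swap; something like:

```bash
cd /var/discourse
./launcher bootstrap web_only   # builds the new image while the old container keeps serving
./launcher destroy web_only     # downtime starts here
./launcher start web_only       # and ends once the new container is up
```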
You are really a top-shelf valuable resource here, without a doubt!
What do you think about this maybe crazy idea (based on my still limited understanding):
Set up nginx as a reverse proxy on the front end; per this tutorial:
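On the host, something along these lines (a rough sketch in the spirit of that guide; the hostname and socket path are placeholder assumptions):

```nginx
server {
    listen 80;
    server_name forum.example.com;   # placeholder hostname

    location / {
        # proxy to the unix socket the container's nginx listens on
        proxy_pass http://unix:/var/discourse1/shared/standalone/nginx.http.sock:;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```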
Then have two directories / instances with discourse_docker (standalone) set up, for example:
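For example (directory names are just placeholders):

```
/var/discourse1/   # first clone of discourse_docker (containers/app.yml)
/var/discourse2/   # second clone of the same site, built to listen on a different socket
```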
In both of these instances, set up discourse_docker (standalone) to listen on a different socket by modifying this template in each instance:
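That is, enable the socketed template in each instance’s containers/*.yml; a sketch (your templates list may differ):

```yaml
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.socketed.template.yml"   # makes nginx in the container listen on a unix socket
```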
So, in a nutshell, we have simply rebuilt production (at some quiet time) to run in a different container listening on a different socket (nginx.https.sock2), so there is no socket conflict; we can build it in standalone mode as well (with the goal of eliminating the need for two containers, data and web-only).
For example (for discussion / illustration), in web.socketed.template.yml in discourse1:
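Something like this, a sketch from my reading of the stock template with only the socket name changed (the exact from/to patterns may differ by version):

```yaml
run:
  - replace:
      filename: "/etc/nginx/conf.d/discourse.conf"
      from: /listen 443 ssl http2;/
      to: |
        listen unix:/shared/nginx.https.sock2 ssl http2;   # sock2 instead of sock
```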
However, instead of having the discourse template do the magic, we simply switch sockets manually in /etc/nginx/conf.d/discourse.conf and restart nginx; so we would remove the replace: directive from the web.socketed.template.yml template.
In this proposed (maybe crazy) configuration, we can have two standalone containers listening on two different sockets (not in conflict) and simply configure nginx to connect to whichever socket we wish, then restart nginx.
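In other words, the cutover would be something like (paths are placeholders; GNU sed syntax):

```bash
# Point the host nginx at the other instance's socket, then reload gracefully.
sudo sed -i 's|/var/discourse1/shared|/var/discourse2/shared|' /etc/nginx/conf.d/discourse.conf
sudo nginx -t && sudo nginx -s reload
```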
This seems clear, easy, and perhaps useful (during a slow period with zero new posts on the live instance) for those who might not want (or need) the complexity of two containers (data and web-only) per single Discourse instance (app).
Of course, the most robust configuration (from a data perspective) for busy sites would still be the “two container” solution, because we would want a shared data container plus the web-only instances (now listening on two different sockets, sock and sock2).
In the “two container solution” with the nginx front end, the “standard configuration” is to have both web-only containers listen on the same socket, so both cannot run at the same time; but if (for example only) we had them listen on different sockets, they could both run at the same time and we could just use the nginx config file (and an nginx restart) to switch between the two.
Is this the right understanding?
Am I starting to (slowly but hopefully surely) understand this?
Followup Note Only: I have the “two container” config working on one of my desktop macs:
Thanks for confirming. That is exactly how we are set up (single directory) after experimenting today.
All has gone well with the two-container config (2CC), but I am struggling with the nginx reverse proxy setup on macOS.
I cannot get a working connection to the unix domain socket in the /shared directory, even though the socket is accessible outside the container. I tried with nginx, and also with python and socat (for testing). Always a “connection refused” error (errno 61), hmmmm.
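For anyone following along, the most direct probe I have found (socket path is a placeholder):

```bash
curl -v --unix-socket /var/discourse/shared/standalone/nginx.http.sock http://localhost/
```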