Using the docker container ./launcher script how would I rebuild the web_only container before taking the old one offline?


(chamunks) #1

# The problem
I want to be able to rebuild/upgrade the web_only container without interrupting the live forum. Is it possible to do this without first destroying the web_only container, which is how rebuild works?

./launcher rebuild:    Rebuild a container (destroy old, bootstrap, start new)

What I imagine would work

I imagine I would run the following series of commands, rather than just running ./launcher rebuild, which destroys the old container and thus cuts off access to the site.

./launcher bootstrap web_only
./launcher stop web_only
./launcher start web_only

I can’t imagine that I’m really doing this right.

My request/question

I would love it if someone could suggest how to do this correctly.

(Jeff Atwood) #2

Do you have two web containers serving requests, with a load balancer in front like haproxy? I don’t think this is possible without two web containers… one can be down while it is updating and the other picks up the slack. Then the new one takes over as the other is updated.

(chamunks) #3

From what I understand, Docker is designed to let you build images prior to deploying them. Instead of completely interrupting a service during upgrades, you stage the upgrade, and once you have a prebuilt container ready, you just destroy the old container and replace it with the upgraded one.

It seems the launcher included in discourse-docker doesn’t quite follow this model.

Also thanks for your speedy reply.

(Jeff Atwood) #4


(chamunks) #5

I don’t imagine load balancing really needs to happen, as excited as I would be to learn how to do it. I’m not too concerned about perfect HA (High Availability) cluster-level uptime, but taking the web container offline for the five minutes required to rebuild it is a bit awkward.

This Stack Overflow response may better explain what I mean.

It’s a semi-unrelated topic, but the explanation in the response applies:

The Docker way to upgrade containers seems to be the following:

Application containers should not store application data. This way you can replace app container with its newer version at any time by executing something like this:

docker pull mysql
docker stop my-mysql-container
docker rm my-mysql-container
docker run --name=my-mysql-container --restart=always \
  -e MYSQL_ROOT_PASSWORD=mypwd -v /my/data/dir:/var/lib/mysql -d mysql

You can store data either on the host (in a directory mounted as a volume) or in special data-only container(s). Read more about it here, here, and here.

Upgrading applications (eg. with yum/apt-get upgrade) within containers is considered to be an anti-pattern. Application containers are supposed to be immutable, which shall guarantee reproducible behavior. Some official application images (mysql:5.6 in particular) are not even designed to self-update (apt-get upgrade won’t work).

### A very rough solution

  1. Remove this section; then, after the bootstrap step, destroy the
    currently running container.
  2. Then instantly boot the replacement in its place.

(Brahn) #6

Does this help?

(chamunks) #7

I wonder if a case could be added to the launcher to handle this process, since we already have the ./launcher rebuild $var case, which already runs a series of different functions.

(Jeff Atwood) #8

I am unclear how that would work, since bootstrap in my experience takes around 8 minutes. Maybe @sam can clarify.

Also that conversation was specific to plugins, and we handle plugins differently now. I’m inclined to delete that other topic outright…

(Sam Saffron) #9

If you are doing web only you can bootstrap while your old container is running … so

./launcher bootstrap web_only
# outage start
./launcher destroy web_only 
./launcher start web_only
# outage end

Expect an outage of 10 seconds or so, which is better than 8 minutes.
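Sam’s sequence is easy to wrap in a small script so the two outage-window commands always run back to back. A rough sketch, where the `rebuild_web` function name and the `DRY_RUN` switch are my own inventions, not part of the launcher:

```shell
#!/usr/bin/env bash
# rebuild_web: bootstrap a fresh web_only image while the old container
# keeps serving, then swap with the shortest possible outage window.
# Hypothetical wrapper around ./launcher, not a built-in command.
rebuild_web() {
  local run=""
  if [ "${DRY_RUN:-0}" = "1" ]; then
    run="echo would run:"   # print commands instead of executing them
  fi
  # Slow part (minutes): the live site stays up during the bootstrap.
  $run ./launcher bootstrap web_only
  # Fast part (seconds): downtime is only between these two commands.
  $run ./launcher destroy web_only
  $run ./launcher start web_only
}

# Demo: show the sequence without touching any containers.
DRY_RUN=1 rebuild_web
```

Run `rebuild_web` normally to perform the swap, or with `DRY_RUN=1` to preview the commands first.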

(chamunks) #10

I don’t write super fancy scripts, but I have a few bash powers I can contribute to get the ball rolling. If you want, I can PR that Monday or Wednesday next week.

That said, I might look into the concept of multiple web containers, because I want to actually have a development instance that runs next to the live instance.

(Kane York) #11

You can do that with just standalone.yml

cp samples/standalone.yml containers/app.yml
cp samples/standalone.yml containers/staging.yml

# Change staging.yml to use a different 'shared' folder name in volumes:

# then...

./launcher rebuild staging

You’ll also want to follow the directions here:

Set up two nginx “sites”, and the staging server’s unix socket will be in the second ‘shared’ folder name you picked.
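The “different shared folder” rename can be scripted. A sketch, assuming the stock standalone.yml layout where host paths live under /var/discourse/shared/standalone (that path is an assumption; verify it in your own copy before running):

```shell
#!/bin/sh
# Give staging its own 'shared' directory so it can never touch the live
# site's data. The standalone path below is an assumption about the
# sample file's layout; inspect containers/staging.yml after the edit.
cp samples/standalone.yml containers/staging.yml
sed -i 's|shared/standalone|shared/staging|g' containers/staging.yml
```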

(chamunks) #12

@riking Oh thanks. I read this at first and wondered how exactly it applied, but then I realized what you were suggesting. I can’t really run with the standalone container, as I don’t want the data container to be required during the rebuild process.

(Kane York) #13

I remain utterly unconvinced that anyone who has had questions about setting up a split web/data container actually needs it. Most of your upgrades are done through the UI with zero downtime anyway.

Similarly, needing HA infrastructure is a nice problem to have. As in, “wow, it’s nice that we’re so popular that we’re having this problem”.

Almost everyone is fine with just a single server.


Maybe it would be better to separate nginx and rails into separate containers. When rebuilding rails, bring up a new container, point nginx at it, then shut down the old one. Still not quite perfect: it’d be better if connections still running on the old rails container could complete while new connections go to the new container. Is there a way to do this?
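One answer to that question is nginx’s graceful reload: `nginx -s reload` starts fresh workers for new connections while the old workers finish their in-flight requests. A hedged sketch of such a swap, where every container name, image name, and config path is illustrative rather than anything discourse_docker actually ships:

```shell
#!/usr/bin/env bash
# swap_backend: graceful backend swap for a hypothetical split
# nginx/rails setup. All names and paths here are made up for
# illustration; adapt them to your own containers.
swap_backend() {
  local run=""
  if [ "${DRY_RUN:-0}" = "1" ]; then
    run="echo would run:"   # print commands instead of executing them
  fi
  # 1. Start the replacement rails container.
  $run docker run -d --name rails_new my/rails-image
  # 2. Repoint nginx's upstream and reload gracefully: old workers
  #    finish requests still running against the old container while
  #    new workers send new connections to the new one.
  $run docker exec nginx sed -i s/rails_old/rails_new/ /etc/nginx/conf.d/discourse.conf
  $run docker exec nginx nginx -s reload
  # 3. Once the old connections drain, retire the old container.
  $run docker stop rails_old
  $run docker rm rails_old
}

# Demo: show the sequence without touching docker or nginx.
DRY_RUN=1 swap_backend
```

The drain step here is left to judgment; a real implementation would check that the old container has no active requests before stopping it.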

dokku handles this sort of switching-in of rebuilt containers. It might be worth looking at how they do it.


Could the rebuild command be made to recognise that this container is web only, and do the right thing?

Second best would be to have another command like rebuild that runs the appropriate sequence of steps.

In either case there should be a test that the new container has come up successfully before it’s switched in.