Looking up available upgrades via CLI

I’m looking at automating upgrades of Discourse. This is for the most basic single-container install, and I’m trying to keep things simple.

The most basic approach would be to set this up to run via cron:

cd /var/discourse
git pull
./launcher rebuild app
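
For example, with a crontab entry along these lines (the schedule and log path are just placeholders):

# /etc/cron.d/discourse-upgrade: rebuild every Sunday at 04:00
0 4 * * 0 root cd /var/discourse && git pull && ./launcher rebuild app >> /var/log/discourse-upgrade.log 2>&1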

The problem is that this takes the service offline for the rebuild even when there are no updates.

I’d prefer to do something like this:

cd /var/discourse
if ./update_available.sh; then
  git pull
  ./launcher rebuild app
fi

(assuming the script exits with true if updates are available)

Anyone know of any easy methods to get this info on the host?

Thanks!

I’d suggest looking into how the docker_manager plugin (https://github.com/discourse/docker_manager) works. It compares the running version to GitHub, and updates with close to no downtime. Perhaps you can automate that instead of doing a full rebuild each time?

You might check out having separate data and web containers. That way you don’t need to take the site down to build a new image.

Thanks for the suggestion. Interesting path, definitely.

I’ve found what looks to be the function that initiates the update:
https://github.com/discourse/docker_manager/blob/a7b256245feba7f54a5314595a527d047b5ffb91/lib/docker_manager/git_repo.rb#L12

But that can only be kicked off once I have detected the local git version, fetched the version available, and compared the two. I would need to handle all of that myself to mimic the user interaction with the interface, which is starting to look a bit complex.

It would, as you point out, be a faster upgrade than a full rebuild. But if I already have the local and remote git commits, I would rather compare them in bash and let the upgrade take a bit longer than poke this deep into Discourse.
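
For reference, this is the kind of bash comparison I have in mind. A minimal sketch, assuming that any difference between local HEAD and the remote branch counts as an update being available (the repo path and branch name are placeholders):

#!/usr/bin/env bash
# update_available.sh: exit 0 if local HEAD differs from the remote branch.
set -euo pipefail

REPO_DIR=/var/discourse   # placeholder: point at the checkout you want to track
BRANCH=main               # placeholder: adjust to the branch you follow

cd "$REPO_DIR"
git fetch --quiet origin "$BRANCH"

if [ "$(git rev-parse HEAD)" != "$(git rev-parse "origin/$BRANCH")" ]; then
  exit 0  # updates available
fi
exit 1    # already up to date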

Thanks for the suggestion.

I may need to dive into the documentation a bit more to understand it fully.

Is my understanding right that you are suggesting I build a new web container, switch traffic over to it, and then pull down the old one when done? I.e. “containers/web1.yml” and “containers/web2.yml”.

I would need a load balancer or traffic forwarder of sorts in front of the containers, right?

See How to move from standalone container to separate web and data containers

Thanks for the link. Reading that thread and this post made it clear.

The new web container can be built behind the scenes and then switched over. So I would end up with:

cd /var/discourse
git pull
./launcher bootstrap web_only
./launcher destroy web_only
./launcher start web_only

instead of:

cd /var/discourse
git pull
./launcher rebuild app

or

./launcher bootstrap web_only && ./launcher destroy web_only && ./launcher start web_only

That way, if the bootstrap fails, you won’t kill your working server.
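
Putting it all together, a cron-ready sketch (assuming the update_available.sh check from earlier in the thread and a web_only container; the script name is arbitrary):

#!/usr/bin/env bash
# upgrade_if_needed.sh: only rebuild when there is something new, and only
# swap containers if the bootstrap succeeded.
set -euo pipefail

cd /var/discourse

if ./update_available.sh; then
  git pull
  # The old container keeps serving while the new image is bootstrapped.
  ./launcher bootstrap web_only &&
    ./launcher destroy web_only &&
    ./launcher start web_only
fi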

Yes, very good point. Thank you!