Self-hosted upgrade to 3.1.0.beta2 with typical multi-container install requires extra downtime

I’d like to repeat that I was not using the GUI updater. I have a multi-container install. I did:

git pull
./launcher bootstrap app
./launcher destroy app && ./launcher start app
./launcher cleanup

(I use app as the container name for the web app even in multi-container installs. I know it’s not normal practice; I hate typing web_only.)
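For reference, the same low-downtime sequence with the conventional container name from the multi-container install docs looks like this (a sketch assuming the usual data/web_only split; only the container name differs from what I ran):

```shell
git pull
./launcher bootstrap web_only                        # build new image; runs db:migrate against the live database
./launcher destroy web_only && ./launcher start web_only   # brief outage: swap old container for new
./launcher cleanup                                   # reclaim old images/containers
```

The point of bootstrapping before destroying is that the long rebuild happens while the old container keeps serving; only the destroy/start step is downtime.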

Sometime after I started the bootstrap and before I destroyed the app, the old version running against the new database showed only an error screen. I don’t remember the contents, and I didn’t prolong the outage by stopping to take a screenshot before doing the destroy/start, but it was only text on white and was not the system maintenance page. I have seen this only a few times before: when the bootstrap runs db:migrate as part of the asynchronous “zero-downtime” rebuild, the old software still running can fail due to a schema inconsistency.

What I saw was whatever happens in the case of a database inconsistency. That’s far better than blissfully soldiering on and corrupting the database! When I posted, it was to warn that this was one of those rare cases where applying a point update (here, from 3.1.0.beta1 to 3.1.0.beta2) created a schema incompatibility between the 3.1.0.beta1 code and the database once the 3.1.0.beta2 db:migrate had run, as happens rarely but occasionally with the normal low-downtime update in a multi-container deployment.
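The failure mode is easy to demonstrate in miniature. A minimal sketch (using Python and SQLite purely for illustration; the table and column names are hypothetical, not Discourse’s actual schema): a migration renames a column while the old code, still serving requests, continues to query it by its old name.

```python
import sqlite3

# In-memory stand-in for the production database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE topics (id INTEGER PRIMARY KEY, image_url TEXT)")
db.execute("INSERT INTO topics (image_url) VALUES ('x.png')")

# The "beta2" migration runs during bootstrap, while the old
# container is still live:
db.execute("ALTER TABLE topics RENAME COLUMN image_url TO image_upload_id")

# The "beta1" code issues its usual query and now errors out,
# which is what surfaces as the text-on-white error page:
try:
    db.execute("SELECT image_url FROM topics")
    old_code_broke = False
except sqlite3.OperationalError:
    old_code_broke = True

print(old_code_broke)  # True
```

Failing loudly here is the right behavior: the old code never writes through the stale schema, so nothing is corrupted, at the cost of a short error window until the destroy/start swap completes.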

My experience is different from the error that has been reported with ruby in the GUI updater; it’s a completely unrelated problem. I recognize that my post was moved out of the announcement into a general “problems with” thread, but I want to be clear: I posted in the announcement so that other self-hosters like me, on seeing it, would know that this particular update was one that could have this impact.

My message was not complaining about a bug, or even a problem. It was intended only as notice of a normal but infrequent situation associated with this particular release and not called out in the release notes.

The complaints about the docker manager not recognizing that it can’t update from within the image are completely unrelated to my attempt to provide a helpful notification to other self-hosting admins.

It would make a lot of sense to separate these unrelated issues into independent threads for independent problems. EDIT by @supermathie: Done
