Cannot be done in a single-container setup without major headaches… part of the bootstrap process is spawning the database, and you can’t have two instances of the database running at the same time.
How does this work when using a separate data container? Does it run migrations on the data that the old site is still using, potentially breaking the site until the web container is upgraded if the migrations are incompatible?
Yes, but… we are very careful to keep migrations data-compatible; we even have a fairly elaborate scheme that delays dropping columns when we move them to new tables.
Should this process have copied my data & configuration over? I ran through the steps but got a clean/“fresh” site, so I rolled back to the single-container version.
I did a restore from a backup made just before the process, and it seemed to restore everything. So this probably just needs to be made clear.
I flagged it to ask for the topic to be wikified… let’s amend it.
PS: I don’t know whether that’s the right way to use a flag…
Yes, that’s the right way to use the notify moderator feature
Can you say a bit more about how this process works in your mind?
After switching to the multi-container scenario, I just did a ./launcher rebuild web_only
but my downtime was the same. Clearly I still need more coffee this morning.
./launcher bootstrap web_only
(outage starts)
./launcher destroy web_only
./launcher start web_only
(outage ends)
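For anyone scripting this, here is a minimal sketch of that sequence (assuming a standard install under /var/discourse and a web container named web_only, as in the posts above; adjust the path and name to your setup):

#!/bin/bash
# Low-downtime redeploy: build first, swap containers second.
set -e
cd /var/discourse   # standard install location; adjust if yours differs

# Build the new image while the old container keeps serving traffic.
./launcher bootstrap web_only

# The outage window is only the swap itself.
./launcher destroy web_only
./launcher start web_only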
Is there any way to skip the whole uglifying and compressing of assets & locales during bootstrap if the assets already exist?
Maybe it would make sense to have a dedicated assets volume, and/or to compare checksums of the input files against the output filenames before a recompile is attempted.
Or to use the cache of a Docker image build on bootstrap, which could cache the whole git pull / install process unless the git repo changes (https://stackoverflow.com/a/39278224).
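As a rough illustration of the checksum idea (everything here is hypothetical: the stamp file, the asset paths, and the skip logic are made up to show the mechanism, not an existing launcher feature):

# Hash all asset/locale inputs; skip the precompile if nothing changed.
STAMP=/shared/assets.checksum
NEW_SUM=$(find app/assets config/locales -type f -print0 | sort -z \
  | xargs -0 sha256sum | sha256sum | awk '{print $1}')
if [ -f "$STAMP" ] && [ "$(cat "$STAMP")" = "$NEW_SUM" ]; then
  echo "Assets unchanged, skipping precompile"
else
  bundle exec rake assets:precompile
  echo "$NEW_SUM" > "$STAMP"
fi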
Yes, I discussed this with @eviltrout in the past; we would need a post-build step, an asset server, and to customize asset precompilation.
I estimate it would take less than two days to set up.
In the event of a failure in the start method, how can you roll back to the previous container?
I must admit this has been a bit of a headache for me as well.
I just went HTTPS (smooth process, great) and decided to change the dated “out” email at the same time (basically Gmail -> Gandi). Four hours and umpteen rebuilds later, I’m just reverting to the dated one (at least for today). Ten minutes per rebuild is not fun, not to mention the time spent figuring out how to clean up when I ran out of space. I’ll be faster next time.
Do all the settings in app.yml really require a rebuild? One would imagine changing email settings would just change something in one place.
If you change something in env: or expose:, then you can just do ./launcher restart app
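For example, the SMTP settings live under env: in containers/app.yml (the values below are placeholders), so changing them needs only a restart, not a rebuild:

# containers/app.yml (excerpt):
#   env:
#     DISCOURSE_SMTP_ADDRESS: smtp.example.com
#     DISCOURSE_SMTP_USER_NAME: postmaster@example.com
# After editing anything under env: or expose:
./launcher restart app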
Ok, I’m sure there is somewhere I can figure out which setting belongs to which section?
I really try not to ask until I’ve investigated (read: suffered), but this time I apparently/probably should have…
Edit: Ok, looking at app.yml gave me a clear answer, never mind.
Friday and 7h of configuring is not good for anyone. Luckily I’m not 15 anymore, when 24h of configuring would have been fine…
Rebuilds are about 25% faster now as we finally skip needlessly recompiling locales.
Not sure what else we could do to make it faster.
Hello, could there be an option to prepare the new container/image first and only then drop the old one? Not like now, where the old container is removed, then the new one is built and started.
I want the deployment process to be as invisible as possible.
Update: found the answer -
You can achieve that by moving to a separate-container install (separate web & data). The data container only really needs to be rebuilt when there is a Postgres update. Otherwise, you can just bootstrap, destroy & start the web container, and downtime should generally be in the range of 10-15 seconds.
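And for the Postgres-update case mentioned there, a short sketch (assuming the standard separate-container file names data and web_only; note that a data rebuild does take the site down for its full duration):

# Only needed when a Postgres update ships:
./launcher rebuild data
# Then rebuild the web container against the new data container:
./launcher rebuild web_only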