It looks like 2GB RAM + 2GB swap may not be enough anymore… I’m managing a fairly unassuming Discourse instance (no big or fancy plugins, etc.) that, as of today, is failing to bootstrap because Ember is chewing through all the RAM it can find, and all the swap, and grinding the machine into unresponsiveness. Adding another 2GB of swap allowed the bootstrap to complete, with peak swap usage of around 2.5GB.
Yikes, this is on @david’s list to investigate.
@david has been investigating. We can confirm that, as it stands, 2GB is enough for Docker rebuilds, but not enough for the web upgrader to work.
One idea I have tossed around is shutting down all Ruby processes during the web upgrade, which would save an extra 300-500MB and leave enough headroom for asset precompilation.
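For context, a standard discourse_docker container supervises its processes with runit, so freeing that memory manually before an upgrade could look roughly like this. This is a hypothetical sketch: the service name and whether this is safe during a web upgrade depend on your setup, and stopping the app server takes the site down.

```shell
# Hypothetical sketch: free Ruby memory inside a standard discourse_docker
# container before an upgrade. Assumes runit supervision and a service
# named "unicorn"; adjust for your install. This takes the site offline.
cd /var/discourse
./launcher enter app        # open a shell inside the running container
sv stop unicorn             # stop the Ruby app servers
free -m                     # check how much memory was actually freed
# ... run the upgrade ...
sv start unicorn            # bring the site back up
```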
A long-term approach we are likely going to need for self-hosters is shipping pre-bootstrapped containers, but that is a Pandora’s box: how would a web upgrader pull that off? We don’t want to mount Docker sockets…
it sure is a pickle.
Well, it wasn’t for me.
Is this a comparison between a basic, pristine install and real-world setups?
Indeed, it’s not perfectly consistent. Even with everything else shut down, it can still fail.
Unfortunately, we’re fighting a losing battle against modern JS build tooling here. It’s all designed to run on modern developer machines with 8GB+ of RAM, not 1GB VPSs.
We do have some solutions in mind. For example: providing pre-built assets which can be automatically downloaded. The big challenge we have is plugins, because they vary on everyone’s site, and right now they’re integrated tightly into the core build.
But for now, a full CLI rebuild should have a higher success rate than a web UI update.
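For anyone unsure what a full CLI rebuild looks like, the standard steps are below. This assumes the default `/var/discourse` install location and the default container name `app`; adjust if yours differ.

```shell
# Standard discourse_docker CLI rebuild.
# Assumes the default install location and container name.
cd /var/discourse
git pull                    # update discourse_docker itself first (common practice)
./launcher rebuild app      # rebuilds and restarts the "app" container
```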
Like Jagster, 2GB RAM + 2GB swap is not, in fact, enough for my CLI-driven Docker rebuild. Checking further, the only plugins on this install are docker_manager and discourse-prometheus, neither of which would, I expect, put unexpected load on Ember.
If the minimum specs have to change, that would suck, but it would be a lot better than the current situation, where machines unexpectedly grind to death on every upgrade.
If that’s the case, I think it would still be better to increase the recommended specs a bit. Personally, I don’t really mind adding 2 (or even 4) more GB of swap if it makes rebuilds more reliable - at least as long as daily operations are still fine with 2-4 GB of RAM (for small to medium-sized communities).
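As a rough sanity check before kicking off a rebuild, something like this reads total RAM + swap from `/proc/meminfo`. The 6GB threshold is just an assumption based on the numbers reported in this thread, not an official figure.

```shell
#!/bin/sh
# Rough pre-rebuild headroom check: sum RAM + swap from /proc/meminfo.
# The 6144MB threshold is an assumption drawn from this thread, not a spec.
total_kb=$(awk '/^(MemTotal|SwapTotal):/ {sum += $2} END {print sum}' /proc/meminfo)
total_mb=$((total_kb / 1024))
echo "RAM+swap available: ${total_mb} MB"
if [ "$total_mb" -lt 6144 ]; then
    echo "Low headroom: consider adding swap before rebuilding"
fi
```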
Indeed, the initial install failed on my recent 4-core / 4GB instance. Ed suggested creating a swap file; I found the topic on creating swap and added a 4GB swap file. Now everything works as expected for both web and CLI update/upgrade.
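For reference, adding a 4GB swap file on a typical Linux VPS looks something like the following. This is a sketch of the usual steps, so double-check against the swap how-to for your distro before running it.

```shell
# Create and enable a 4GB swap file (typical Linux steps; requires root).
sudo fallocate -l 4G /swapfile   # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo chmod 600 /swapfile         # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
swapon --show                    # verify the new swap is active
```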
IMHO, we may just need to accept that Discourse requires more RAM than it used to.
Wouldn’t zram make sense?
We just landed this commit which should hopefully improve the situation. Please let us know how you get on! (it’ll hit tests-passed in the next ~30 mins)
When testing with a memory-constrained Docker container locally, I can now get a successful build with `-m 1600m`. Before this change, the minimum I could achieve was `-m 3000m`.
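To reproduce that kind of memory-constrained test locally, the relevant Docker flags look like this. The image and command names are placeholders, not the actual test setup used above.

```shell
# Sketch: run a build under a hard memory cap. Setting --memory-swap equal
# to --memory disables extra swap, so the build must fit in 1600MB of RAM.
# "your-build-image" and "your-build-command" are placeholders.
docker run --rm \
  --memory 1600m \
  --memory-swap 1600m \
  your-build-image \
  your-build-command
```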