Precompiling assets takes 20 minutes

I am rebuilding the image on a Digital Ocean droplet and one step takes ages:

I, [2024-01-10T09:47:14.854311 #1]  INFO -- : > cd /var/www/discourse && su discourse -c 'bundle exec rake themes:update assets:precompile'
Node.js heap_size_limit (492.75) is less than 2048MB. Setting --max-old-space-size=2048.
[WARN] (broccoli-terser-sourcemap) Minifying "assets/admin.js" took: 25461ms (more than 20,000ms)
[WARN] (broccoli-terser-sourcemap) Minifying "assets/plugins/chat.js" took: 47818ms (more than 20,000ms)
Purging temp files
Bundling assets
I, [2024-01-10T10:06:07.644096 #3264]  INFO -- : Writing /var/www/discourse/public/assets/break_string-cc617154cd957804f2f6a1f3bc68258c9cdca3d4b9a322bf777d145fed04790e.js

The droplet has 1 GB of RAM and otherwise runs Discourse just fine. Am I doing something wrong? Can I do something to speed up the rebuild? Thank you!


I believe 1 GB will leave you very memory constrained these days.

It’s getting to the point where I’d actually recommend a minimum of 4GB for a Discourse instance (plus swap!) (even with 2GB + 2GB swap I find online updates painfully slow).


Thank you! Unfortunately that’s about four times the price for an improvement I would probably only feel during updates. Also, the cloud install guide still says:

The default of 1 GB RAM works fine for small Discourse communities. We recommend 2 GB RAM for larger communities.

Do we know where the memory pressure in this step comes from? Perhaps it would be possible to trade a worse compression ratio, or something similar, for lower memory requirements?

It’s coming from ember-cli.

You’re already experiencing a time vs. space trade off (lack of memory space causes the process to take longer).
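For what it’s worth, the heap_size_limit figure in the log above is V8’s default heap ceiling, which scales with the machine’s RAM; the precompile step raises it to 2048MB when it is lower, which then pushes the process into swap on a 1GB box. You can query the limit yourself (a quick check; it assumes node is on the PATH, as it is inside the container):

```shell
# Query V8's heap_size_limit, the figure the precompile step compares
# against 2048MB before setting --max-old-space-size=2048.
if command -v node >/dev/null 2>&1; then
  HEAP_MB=$(node -p 'Math.round(require("v8").getHeapStatistics().heap_size_limit / 1048576)')
  echo "heap_size_limit: ${HEAP_MB} MB"
else
  HEAP_MB=0
  echo "node not found on PATH"
fi
```

On a 1GB droplet this prints something close to the 492.75 seen in the log.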


Related topic:


I think for my next update on my two servers, I will use the hosting provider’s flexibility to migrate to larger RAM before updating, and migrate back to my present minimum immediately afterwards. There’s a small amount of extra downtime, but if the rebuild is much quicker it could be an overall win. The extra expense for an hour of extra RAM should be under $1, perhaps even just 1 cent (in my case, going from $6 per month to $12 per month, charged hourly on Digital Ocean at 1 cent and 2 cents respectively).

As noted in the linked thread, sometimes a reboot is helpful in any case, so it’s a good time to update OS packages and reboot, for me.

I’m hoping that this will cause less wear and tear on me, too.

I might in fact choose to go up from 1G to 8G, which will cost an extra 6 cents per hour, to give myself the freedom to delete my swapfile temporarily, and ease the disk space crunch.

Everything peaks at update time - in between times, the present minimal configuration still seems to be adequate.

I can certainly afford 6 cents per upgrade cycle.


That’s very cool! Who is your hosting provider?

Digital Ocean in one case (1G RAM), Hetzner in the other (2G RAM).

They both allow online, in-place increases in RAM temporarily?

Or you need to shuffle between “droplets”/instances?

Or just a reboot?


It’s shutdown - resize - start - rebuild - shutdown - resize back - start


OK, but still in-place. That’s a great option, but yeah, extra hassle … and downtime.

Given that a rebuild on a 1GB machine takes so long, you might as well do this, because it’s going to be down for 30 minutes anyway!

And sure, if you are prepared to do that then even a 16GB machine upgrade temporarily might work out ok cost wise :slight_smile:

I suspect many will consider their time more valuable and should probably start thinking of 4GB+ on a permanent basis.


It’s certainly part of a tradeoff of cost vs time. For myself, I’ve already committed to do an hour of babysitting for the upgrades, and I know fully how to do this sysadmin dance, so the time is already booked. I prefer to keep the monthly running cost as low as I can, even if it does take me some time - others will have other tradeoffs.

For sure, if spending money comes easily, get a comfortably large instance!


Just for reference, I just updated my two forums, each done within an hour of elapsed time; in both cases I temporarily resized to 8G RAM and back again. This particular step took about 5 minutes, with (temporarily) 4 CPUs and 8G RAM:

I, [2024-01-10T16:07:58.323464 #1]  INFO -- : > cd /var/www/discourse && su discourse -c 'bundle exec rake themes:update assets:precompile'
110:M 10 Jan 2024 16:08:52.047 * 100 changes in 300 seconds. Saving...
110:M 10 Jan 2024 16:08:52.048 * Background saving started by pid 3276
3276:C 10 Jan 2024 16:08:52.384 * DB saved on disk
3276:C 10 Jan 2024 16:08:52.386 * Fork CoW for RDB: current 1 MB, peak 1 MB, average 0 MB
110:M 10 Jan 2024 16:08:52.449 * Background saving terminated with success
Purging temp files
Bundling assets
MaxMind IP database updates require a license
Please set DISCOURSE_MAXMIND_LICENSE_KEY to one you generated at
MaxMind IP database updates require a license
Please set DISCOURSE_MAXMIND_LICENSE_KEY to one you generated at
I, [2024-01-10T16:12:14.362017 #3300]  INFO -- : Writing /var/www/discourse/public/assets/break_string-cc617154cd957804f2f6a1f3bc68258c9cdca3d4b9a322bf777d145fed04790e.js

Here we see ember (the command, in the last column) using about 2.5G of RAM (RSS, column 6) and more than one full CPU (%CPU, column 3):

# ps auxfc|egrep -A29 containerd
root      1097  0.2  0.5 1510892 44924 ?       Ssl  16:00   0:01 containerd
root      4507  0.1  0.0 717892  7556 ?        Sl   16:03   0:00  \_ containerd-shim
root      4530  0.1  0.3 312292 30512 ?        Ssl  16:03   0:00      \_ pups
systemd+  4609  0.0  0.3 213236 28608 ?        S    16:03   0:00          \_ postmaster
systemd+  4617  0.0  0.8 213340 67288 ?        Ss   16:03   0:00          |   \_ postmaster
systemd+  4618  0.0  0.0 213236  5876 ?        Ss   16:03   0:00          |   \_ postmaster
systemd+  4619  0.0  0.1 213236 10076 ?        Ss   16:03   0:00          |   \_ postmaster
systemd+  4620  0.0  0.1 213904  8860 ?        Ss   16:03   0:00          |   \_ postmaster
systemd+  4621  0.0  0.0  68004  5592 ?        Ss   16:03   0:00          |   \_ postmaster
systemd+  4622  0.0  0.0 213796  7100 ?        Ss   16:03   0:00          |   \_ postmaster
message+  4682  0.2  0.4  87976 35724 ?        Sl   16:03   0:00          \_ redis-server
1000      7722  1.1  0.0      0     0 ?        Z    16:07   0:01          \_ esbuild <defunct>
root      7736  0.0  0.0   2476   520 ?        S    16:07   0:00          \_ sh
root      7737  0.0  0.0   9296  4156 ?        S    16:07   0:00          |   \_ su
1000      7738  8.3  0.0   2476   580 ?        Ss   16:07   0:12          |       \_ sh
1000      7835  0.4  0.9 929524 78416 ?        Sl   16:08   0:00          |           \_ node
1000      7857  0.0  0.0   2484   524 ?        S    16:08   0:00          |               \_ sh
1000      7858  156 30.5 67413228 2491796 ?    Sl   16:08   3:37          |                   \_ ember
1000      7876 39.0  1.7 11486300 145476 ?     Ssl  16:08   0:44          |                       \_ node
1000      7882 36.7  1.5 11466956 122648 ?     Ssl  16:08   0:41          |                       \_ node
1000      7889 37.1  4.1 11647592 340908 ?     Ssl  16:08   0:42          |                       \_ node
1000      7761  1.5  0.0      0     0 ?        Z    16:08   0:02          \_ esbuild <defunct>

Probably 4G RAM would have been enough for me, but as noted this whole thing only cost a few cents. (I see now that I could have chosen faster CPUs for an extra cent.)

Edit: I took a backup before I started and another after the job was done, and they were 35 mins apart. So the downtime as seen by the users was no longer than that.

Edit: note that the Digital Ocean control panel says the resizing operation may take up to 1 min per GB of data on the disk. In my case there was only 14G, and as it turned out each resize took only 2 mins. But if you have a great deal of data on the instance, this resizing dance might take longer. (Then again, if you have a great deal of data, you are perhaps not trying to run in less than 4G of RAM.)


4GB RAM is still not enough in some cases. For example, I have an 8GB RAM sandbox with virtually no traffic, but it is a multisite setup to allow for having 5 disposable sandboxes. Rebuilding today failed due to Error 137 (OOM), and I had tried the trick @richard suggested above. However, to save myself the hassle of doing this every time, I’ve created a larger swap (4GB), which seems to have allowed the rebuilds to happen for the time being. It seems like we’ve been upgrading servers over the last year just because Discourse rebuilds are getting really RAM hungry for some reason.


Interesting. Do you have the kernel settings as laid out in MKJ’s Opinionated Discourse Deployment Configuration?

(It’s always worthwhile to have swap, 2G or 4G or whatever free disk space will allow. I have minimal swap because I have minimal disk space.)

Thinking about this, the benefit is really limited to full rebuilds - I cannot currently use online upgrades in a 2+2 config :frowning: … and I don’t think I’m going to be doing this upgrade/downgrade dance just to update e.g. a single plugin …

I personally feel a permanent upgrade to at least 4GB is the only way …

Note: I’m not really grumbling about having to move with the times … but we should perhaps start reflecting reality in the documentation and advice to administrators?

It does unfortunately make Discourse a little less accessible to new, especially younger people though :thinking:


I am in fact on board with this idea: keep the present minimum recommended configuration as a target and look into tweaks in the code, or changes upstream, to keep a lid on things. It’s a major change in the offering if the minimum config is now twice the price. Which is why I opined elsewhere that the excessive memory requirement is a bug.


Now I am getting failed upgrades when trying to upgrade to most recent version:

error Command failed with exit code 137.
info Visit for documentation about this command.
#<RuntimeError: RuntimeError>
/var/www/discourse/plugins/docker_manager/lib/docker_manager/upgrader.rb:210:in `run'
/var/www/discourse/plugins/docker_manager/lib/docker_manager/upgrader.rb:111:in `upgrade'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:19:in `block in <main>'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:6:in `fork'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:6:in `<main>'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/railties- `load'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/railties- `perform'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/thor-1.2.2/lib/thor/command.rb:27:in `run'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/thor-1.2.2/lib/thor/invocation.rb:127:in `invoke_command'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/thor-1.2.2/lib/thor.rb:392:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/railties- `perform'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/railties- `invoke'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/railties- `<main>'
<internal:/usr/local/lib/ruby/site_ruby/3.2.0/rubygems/core_ext/kernel_require.rb>:37:in `require'
<internal:/usr/local/lib/ruby/site_ruby/3.2.0/rubygems/core_ext/kernel_require.rb>:37:in `require'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/bootsnap-1.16.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
bin/rails:18:in `<main>'
Spinning up 1 Unicorn worker(s) that were stopped initially

I take it this is an out-of-memory error? Does it mean that 1GB machines are officially out?

Indeed, that’s an out-of-memory error. If you have the disk space to add swap, that will be enough, although the process will take more time than if you were to add RAM. Your hosting provider might offer the chance to upgrade RAM temporarily and then revert, which will probably cost you a couple of reboots, a little downtime, and a few cents of extra cost.

Edit: to be clear, memory = RAM + swap. RAM is fast and swap is cheap.
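Adding swap is only a few commands; a rough sketch for a 4GB swapfile (run as root on the host, not inside the container; the path and size are just examples, adjust to your disk space):

```shell
# Rough sketch: create and enable a 4GB swapfile. Run as root on the host.
# SWAPFILE and SIZE_MB are example values -- adjust to your disk space.
SWAPFILE=/swapfile
SIZE_MB=4096

if [ "$(id -u)" -ne 0 ]; then
  echo "not running as root; skipping"
else
  fallocate -l "${SIZE_MB}M" "$SWAPFILE"   # or: dd if=/dev/zero of=$SWAPFILE bs=1M count=$SIZE_MB
  chmod 600 "$SWAPFILE"
  mkswap "$SWAPFILE"
  swapon "$SWAPFILE"
  # Persist across reboots:
  echo "$SWAPFILE none swap sw 0 0" >> /etc/fstab
fi
```

Note that swapon will generally fail inside a container; these steps are meant for the host machine itself.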