Discourse fails to update in the admin UI, no matter what

When I update using the UI updater in the admin section, it always fails. It has failed since I installed Discourse over a year ago. I can easily SSH into my server and update manually, but it is frustrating to have a feature that is not behaving correctly.

Since Discourse runs in Docker, and I am not very knowledgeable about it, I would like to know whether anyone has had issues like this as well, and how I can fix it.

In short: the UI updater always fails, the command line works first try, and I would like to fix it so I do not have to SSH into the server (as often).

Thanks!

1 Like

Hi, same thing, this has been the case for months.

If you connect to your server by SSH, what does `free -h` return?
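As a quick complement to `free -h`, here is a sketch that reads the same numbers from `/proc/meminfo` (Linux only) and flags low headroom; the ~4 GiB combined threshold is just the rule of thumb discussed later in this thread, not an official requirement:

```shell
# Read total RAM and swap from /proc/meminfo (values are in kB).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
headroom_kb=$((total_kb + swap_kb))

echo "RAM:  $((total_kb / 1024)) MiB"
echo "Swap: $((swap_kb / 1024)) MiB"

# Rough rule of thumb: ~4 GiB RAM+swap combined for an ember build.
if [ "$headroom_kb" -lt $((4 * 1024 * 1024)) ]; then
  echo "Likely too little memory for a web UI rebuild"
fi
```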

After doing some digging, I realize it could be that I am running with the bare minimum of RAM recommended. But this is for a private install with fewer than 50 users, so I really do not need to go above the minimum for my use case.


I have no more than 3 simultaneous users and need more than that. I think you need to upgrade your VPS.


Where are the majority of your users located?

I used to be able to do an update with 2G RAM + 2G swap, but that was, I think, before ember, which is very demanding. If you have the disk space to go up to 4G swap, that might do it. Or, temporarily and carefully, migrate to an instance with more RAM, do the update, and migrate back.
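For reference, the usual Linux recipe for adding a swap file looks like the following. It is printed as a dry run so nothing changes until you run the lines yourself as root; `/swapfile` is an assumed path, and 4G matches the suggestion above:

```shell
# Dry run: print the standard swap-file commands instead of executing them.
# Run the printed lines as root; /swapfile is an assumed path.
swap_cmds=$(cat <<'EOF'
fallocate -l 4G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
EOF
)
echo "$swap_cmds"
```

Afterwards, `free -h` or `swapon --show` should list the new swap space.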

Whatever you do, take a backup and download it first.

Note that most providers will allow you to migrate to the same disk or a bigger disk. You can’t migrate to a smaller disk. So you need to find an offering which has more RAM but the same disk space.

Eventually I changed provider to get a bigger machine at lower cost. (4G RAM and 40G disk)

1 Like

I will take a look at the swap option, and see if that helps. I have 2GB RAM, and 80GB disk.

Unfortunately, my provider does not support automatic changes to system resources, but I also don’t want to pay more than $5.

Thanks for the help.

I used to have 2GB RAM and a 40GB disk, relying on ./discourse-setup to configure swap; web UI updates were slow.

this Ionos USA hosting might be worth looking at

Yeah, I’m on the tier below; after a year it’s $8 a month. Sadly.

I know Contabo provides VPSs for good prices.


You did say under 5…

1 Like

Wow that’s a good price. I’ll take a look. Thanks!

2 Likes

In the US market, both Contabo and IONOS allow inbound port 25, which is critical for mail-receiver configurations — so there’s no functional limitation there.

The real difference lies in reliability and support reputation:

  • Contabo (Trustpilot 4.2/5, ~6,700 reviews) offers aggressive pricing and high specs, but US-based users often report high latency, slower support response, and performance instability, especially under load. Contabo’s US data centers exist, but they’re not always as responsive as expected.

  • IONOS (Trustpilot 4.5/5, ~31,000 reviews) performs better in the US than many assume. It has a stronger support reputation and more reliable infrastructure, with fewer 1-star reviews (~10% vs Contabo’s 16%). Users consistently report better uptime, live support, and account management compared to Contabo.

Conclusion (US):
If you’re US-based and need stability, fast support, and low risk for production workloads, IONOS is the safer pick. Contabo may still be worth considering for dev/test environments or cost-sensitive deployments, but expect trade-offs in latency and support quality.

3 Likes

Mine started doing it too; it was completely fine for 4+ years, and now it fails. Although sometimes it claims it fails, when I refresh, everything is up to date and there’s nothing left to update. But it almost always ends with

ERR_PNPM_RECURSIVE_EXEC_FIRST_FAIL  Command was killed with SIGKILL (Forced termination): ember build -prod
/var/www/discourse/script/assemble_ember_build.rb:103:in `system': Command failed with exit 1: pnpm (RuntimeError)
	from /var/www/discourse/script/assemble_ember_build.rb:103:in `<main>'
Docker Manager: FAILED TO UPGRADE
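A SIGKILL during `ember build` on a small box is usually the kernel OOM killer reclaiming memory, not a bug in the build itself. One way to check, sketched below, is to count matching events in the kernel log (`dmesg` may need root; the count is simply 0 if nothing is visible):

```shell
# Count OOM-killer events in the kernel ring buffer.
# dmesg may require root; grep prints 0 on empty input.
count=$(dmesg 2>/dev/null | grep -ciE 'killed process|out of memory' || true)
echo "OOM-killer matches in kernel log: ${count:-0}"
```

A non-zero count right after a failed web UI update points at memory, in which case more RAM or swap is the fix rather than any change to Discourse.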

In almost all cases, it would be very helpful to see the previous 50 to 200 lines of the output. It’s a pity the scripts don’t advise it.

That’s what I was wondering: whether it was connected to an issue with the code itself and not so much the hardware of my server.

I guess my next possibility is just writing my own plugin with a script to manually update myself.
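For what it’s worth, the SSH fallback does not need a plugin; it can be wrapped in a tiny helper script. A sketch that writes the standard rebuild commands to a file (it assumes the default /var/discourse install location, and the script must be run as root on the server):

```shell
# Write a small helper that does the same update the command line does.
# Assumes the standard /var/discourse install location; run it as root.
cat > update-discourse.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
cd /var/discourse        # default discourse_docker checkout
git pull                 # update the launcher/templates themselves
./launcher rebuild app   # rebuild the container with the latest Discourse
EOF
chmod +x update-discourse.sh
echo "wrote update-discourse.sh"
```

From there a cron entry could run it on a schedule, which would at least remove the manual SSH step even if the web updater stays broken.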

Glad others have the same issue, so it isn’t just me (I know it sucks). Maybe someone who develops actively on Discourse can look into it. I also wish there were better debugging info than it just saying it “fails”.

Ed, I will see if I can get that for you. And post it right away.

I’m not a developer or an expert when it comes to servers and all that. I went with Digital Ocean, just because it was the one mentioned in the official installation instructions, and because I’ve seen that name mentioned over and over again over the years.

At the moment I’m on the second-lowest plan, which is $6 a month for a server that seems to be way “slower” than the ones offered by Contabo or IONOS. Since the minimum for good Discourse performance is at least 2GB of RAM, I would have to upgrade to the $12 plan. With Contabo’s $4.95 a month, I would get 8GB… it’s a “small” difference :wink: both in price and RAM, not to mention disk space, etc.

So, asking you and any other experienced users: does it make sense for me to migrate my Discourse to Contabo, for example, instead of staying with Digital Ocean? I’m still building the whole community, and so far DO has been OK, apart from the issue of updating Discourse on the web, even with a 4GB swapfile (because my disk space is just 25GB). But I don’t want to migrate everything only to start noticing other issues.

I found this page, but I’m not sure how reliable these tests really are, or whether they are enough to make me switch.

Any feedback would be greatly appreciated!
Thanks! :raising_hands:

1 Like

This completely destroys what Digital Ocean offers for $6 a month with just 1GB of RAM…

Would you recommend switching?

********************************************************
*** Please be patient, next steps might take a while ***
********************************************************
Cycling Unicorn, to free up memory
Restarting unicorn pid: 1580
Waiting for Unicorn to reload.
Waiting for Unicorn to reload..
Waiting for Unicorn to reload...
Waiting for Unicorn to reload....
Waiting for Unicorn to reload.....
Waiting for Unicorn to reload......
Waiting for Unicorn to reload.......
Waiting for Unicorn to reload........
Waiting for Unicorn to reload.........
Waiting for Unicorn to reload..........
Waiting for Unicorn to reload...........
Waiting for Unicorn to reload............
Waiting for Unicorn to reload.............
Waiting for Unicorn to reload..............
Stopping 1 Unicorn worker(s), to free up memory
Stopping job queue to reclaim memory, master pid is 1585
$ cd /var/www/discourse && git fetch --tags --prune-tags --prune --force
$ cd /var/www/discourse && git reset --hard HEAD@{upstream}
HEAD is now at 20ff23ed0 DEV: remove redundant translations for disabled new topic btn (#33929)
$ bundle install --retry 3 --jobs 4
Bundle complete! 160 Gemfile dependencies, 207 gems now installed.
Gems in the groups 'test' and 'development' were not installed.
Bundled gems are installed into `./vendor/bundle`
3 installed gems you directly depend on are looking for funding.
  Run `bundle fund` for details
$ if [ -f yarn.lock ]; then yarn install; else CI=1 pnpm install; fi
Scope: all 16 workspace projects
Lockfile is up to date, resolution step is skipped
Already up to date

Done in 2.9s using pnpm v9.15.9
$ LOAD_PLUGINS=0 bundle exec rake plugin:pull_compatible_all
discourse-custom-wizard is already at latest compatible version
docker_manager is already at latest compatible version
$ SKIP_POST_DEPLOYMENT_MIGRATIONS=1 bundle exec rake multisite:migrate
Multisite migrator is running using 1 threads

Migrating default
Seeding default
*** Bundling assets. This will take a while *** 
$ bundle exec rake themes:update assets:precompile
Updating themes with concurrency: 10
[db:default] 'Air Theme' -  checking...
[db:default] 'Air Theme' -  up to date
[db:default] 'Modern Category + Group Boxes' -  checking...
[db:default] 'Modern Category + Group Boxes' -  up to date
[db:default] 'Clickable Topic' -  checking...
[db:default] 'Clickable Topic' -  up to date
[db:default] 'Search Banner' -  checking...
Node.js heap_size_limit is less than 2048MB. Setting --max-old-space-size=2048 and CHEAP_SOURCE_MAPS=1
Existing build is not reusable.
- Existing: {"ember_env"=>"production", "core_tree_hash"=>"cd74e4ac33647244c041061633d6ca67f9166e5c"}
- Current: {"ember_env"=>"production", "core_tree_hash"=>"7ac67590cc51e22690a2711b593892cd1d266781"}
Running full core build...
Building
Environment: production
The setting 'staticAddonTrees' will default to true in the next version of Embroider and can't be turned off. To prepare for this you should set 'staticAddonTrees: true' in your Embroider config.
The setting 'staticAddonTestSupportTrees' will default to true in the next version of Embroider and can't be turned off. To prepare for this you should set 'staticAddonTestSupportTrees: true' in your Embroider config.
building...
...[ConfigLoader]
...[Babel: @embroider/macros > applyPatches]
...[Babel: @ember/legacy-built-in-components > applyPatches]
...[Babel: ember-source > applyPatches]
[BABEL] Note: The code generator has deoptimised the styling of /var/www/discourse/app/assets/javascripts/discourse/ember/ember-template-compiler.js as it exceeds the max of 500KB.
[BABEL] Note: The code generator has deoptimised the styling of /var/www/discourse/app/assets/javascripts/discourse/ember/ember.js as it exceeds the max of 500KB.
...[Babel: @glimmer/component > applyPatches]
...[Babel: @ember/test-waiters > applyPatches]
...[Babel: ember-this-fallback > applyPatches]
...[Babel: ember-cache-primitive-polyfill > applyPatches]
...[Babel: select-kit > applyPatches]
...[@embroider/compat/app]
...[@embroider/webpack]
...[@embroider/webpack]
...[@embroider/webpack]
...[@embroider/webpack]
...[@embroider/webpack]
...[@embroider/webpack]
...[@embroider/webpack]
undefined
 ERR_PNPM_RECURSIVE_EXEC_FIRST_FAIL  Command was killed with SIGKILL (Forced termination): ember build -prod
/var/www/discourse/script/assemble_ember_build.rb:103:in `system': Command failed with exit 1: pnpm (RuntimeError)
	from /var/www/discourse/script/assemble_ember_build.rb:103:in `<main>'
Docker Manager: FAILED TO UPGRADE
#<RuntimeError: RuntimeError>
/var/www/discourse/plugins/docker_manager/lib/docker_manager/upgrader.rb:211:in `run'
/var/www/discourse/plugins/docker_manager/lib/docker_manager/upgrader.rb:112:in `upgrade'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:19:in `block in <main>'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:6:in `fork'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:6:in `<main>'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-8.0.2/lib/rails/commands/runner/runner_command.rb:44:in `load'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-8.0.2/lib/rails/commands/runner/runner_command.rb:44:in `block in perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/activesupport-8.0.2/lib/active_support/execution_wrapper.rb:91:in `wrap'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-8.0.2/lib/rails/commands/runner/runner_command.rb:70:in `conditional_executor'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-8.0.2/lib/rails/commands/runner/runner_command.rb:43:in `perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/thor-1.4.0/lib/thor/command.rb:28:in `run'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/thor-1.4.0/lib/thor/invocation.rb:127:in `invoke_command'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-8.0.2/lib/rails/command/base.rb:178:in `invoke_command'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/thor-1.4.0/lib/thor.rb:538:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-8.0.2/lib/rails/command/base.rb:73:in `perform'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-8.0.2/lib/rails/command.rb:65:in `block in invoke'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-8.0.2/lib/rails/command.rb:143:in `with_argv'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-8.0.2/lib/rails/command.rb:63:in `invoke'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/railties-8.0.2/lib/rails/commands.rb:18:in `<main>'
/usr/local/lib/ruby/3.3.0/bundled_gems.rb:69:in `require'
/usr/local/lib/ruby/3.3.0/bundled_gems.rb:69:in `block (2 levels) in replace_require'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/bootsnap-1.18.6/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:30:in `require'
bin/rails:18:in `<main>'
Spinning up 1 Unicorn worker(s) that were stopped initially

Here you go. Reproduced today.

1 Like