Cannot rebuild app

Log here:

[2025-03-31T05:52:21.927771 #1]  INFO -- : > cd /var/www/discourse && if [ -f yarn.lock ]; then
  if [ -d node_modules/.pnpm ]; then
    echo "This version of Discourse uses yarn, but pnpm node_modules are present. Cleaning up..."
    find ./node_modules ./app/assets/javascripts/*/node_modules -mindepth 1 -maxdepth 1 -exec rm -rf {} +
  fi
  su discourse -c 'yarn install --frozen-lockfile && yarn cache clean'
else
  su discourse -c 'CI=1 pnpm install --frozen-lockfile && pnpm prune'
fi
bash: line 1:   302 Killed                  CI=1 pnpm install --frozen-lockfile
I, [2025-03-31T05:52:29.299652 #1]  INFO -- : Scope: all 17 workspace projects
Lockfile is up to date, resolution step is skipped
Progress: resolved 1, reused 0, downloaded 0, added 0
Packages: +455 -114
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-----------------
Progress: resolved 455, reused 170, downloaded 50, added 119
Progress: resolved 455, reused 170, downloaded 187, added 257
Progress: resolved 455, reused 170, downloaded 281, added 349
Progress: resolved 455, reused 170, downloaded 285, added 357, done
.../core-js@3.33.0/node_modules/core-js postinstall$ node -e "try{require('./postinstall')}catch(e){}"
.../node_modules/@swc/core postinstall$ node postinstall.js
.../esbuild@0.24.2/node_modules/esbuild postinstall$ node install.js
.../core-js@3.33.0/node_modules/core-js postinstall: Done
.../node_modules/@swc/core postinstall: Failed



FAILED
--------------------
Pups::ExecError: cd /var/www/discourse && if [ -f yarn.lock ]; then
  if [ -d node_modules/.pnpm ]; then
    echo "This version of Discourse uses yarn, but pnpm node_modules are present. Cleaning up..."
    find ./node_modules ./app/assets/javascripts/*/node_modules -mindepth 1 -maxdepth 1 -exec rm -rf {} +
  fi
  su discourse -c 'yarn install --frozen-lockfile && yarn cache clean'
else
  su discourse -c 'CI=1 pnpm install --frozen-lockfile && pnpm prune'
fi failed with return #<Process::Status: pid 299 exit 137>
Location of failure: /usr/local/lib/ruby/gems/3.3.0/gems/pups-1.2.1/lib/pups/exec_command.rb:132:in `spawn'
exec failed with the params {"cd"=>"$home", "hook"=>"yarn", "cmd"=>["if [ -f yarn.lock ]; then\n  if [ -d node_modules/.pnpm ]; then\n    echo \"This version of Discourse uses yarn, but pnpm node_modules are present. Cleaning up...\"\n    find ./node_modules ./app/assets/javascripts/*/node_modules -mindepth 1 -maxdepth 1 -exec rm -rf {} +\n  fi\n  su discourse -c 'yarn install --frozen-lockfile && yarn cache clean'\nelse\n  su discourse -c 'CI=1 pnpm install --frozen-lockfile && pnpm prune'\nfi"]}
bootstrap failed with exit code 137
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one.

If memory serves, exit 137 means out of memory.

Add more RAM and/or increase swap. You likely need at least 5 GB of RAM + swap combined.
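
For reference, a shell reports 128 + signal number when a process dies from a signal, so 137 = 128 + 9, i.e. SIGKILL. A quick way to confirm the convention (just an illustration, not taken from the log above):

$ sh -c 'kill -9 $$'; echo $?
Killed
137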

$ free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi       4.1Gi        17Gi       3.5Mi       9.9Gi        27Gi
Swap:          8.0Gi          0B       8.0Gi

But my server has plenty of memory. @pfaffman

That’s very strange. 137 does mean that the job received a SIGKILL, which usually means out of memory.

It doesn’t make sense with that much memory, though.
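
One cheap thing to rule out, since the bootstrap runs inside Docker: a memory limit on the container itself would produce a SIGKILL no matter how much the host has free. This is only a guess, and "app" is the usual Discourse container name - adjust if yours differs:

$ docker inspect --format '{{.HostConfig.Memory}}' app
0

A result of 0 means no container memory limit is set.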

It is strange.

Perhaps check the output of
sysctl vm.overcommit_memory

And also:

cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=24.04
DISTRIB_CODENAME=noble
DISTRIB_DESCRIPTION="Ubuntu 24.04.2 LTS"

$ uptime
 02:12:36 up 21 days,  5:25,  2 users,  load average: 0.02, 0.18, 0.26

$ df -h /
Filesystem                             Size  Used Avail Use% Mounted on
/dev/mapper/vg0--root-root--partition  492G   29G  463G   6% /

$ free
               total        used        free      shared  buff/cache   available
Mem:        32819356     4093776    19016296        3576    10185012    28725580
Swap:        8388604           0     8388604

$ swapon
NAME      TYPE SIZE USED PRIO
/swap.img file   8G   0B   -2

$ vmstat 5 5
procs -----------memory---------- ---swap-- -----io---- -system-- -------cpu-------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st gu
 2  0      0 18944824   3620 10211784    0    0     2    81 1057    1  1  0 99  0  0  0
 0  0      0 18965764   3620 10195160    0    0     0   123 8661 10955  3  6 91  0  0  0
 2  0      0 18953904   3620 10203228    0    0     0     9 3388 3559  1  1 98  0  0  0
 1  0      0 18864336   3620 10292272    0    0     3  3408 8327 10272  6  4 90  0  0  0
 9  3      0 17795380   3620 10561284    0    0  1666 33519 29310 41403 27 27 44  1  0  0

$ dmesg | egrep -i "memory|oom|kill"
none

$ ps auxrc
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
td20      347732  200  0.0  21408  5588 pts/1    R+   02:16   0:00 ps

$ sysctl vm.overcommit_memory
vm.overcommit_memory = 0


@Ed_S

I noticed that when memory usage exceeds about 1 GB, the process gets killed quickly.
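
(For what it's worth, one way to watch that live during a rebuild, assuming the standard procps tools, is something like:

$ watch -n 2 'ps aux --sort=-rss | head -n 10'

which refreshes the ten largest processes by resident memory every two seconds.)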

Thanks… It’s still a bit of a mystery to me. But I would strongly recommend running with overcommit. You can set it now with

sudo sysctl vm.overcommit_memory=1

but to have it survive a reboot takes a little more effort. It may make everything work, in which case making it persistent is the right thing. It looks like I did it by creating a one-line file:

# cat /etc/sysctl.d/90-vm_overcommit_memory.conf 
vm.overcommit_memory=1
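
A minimal way to create that file and apply it without a reboot (sysctl --system re-reads everything under /etc/sysctl.d):

$ echo 'vm.overcommit_memory=1' | sudo tee /etc/sysctl.d/90-vm_overcommit_memory.conf
$ sudo sysctl --system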

If that doesn't help, it feels like some watchdog or quota mechanism is watching for resource overuse and killing processes. A good look at the full dmesg output may help - if the failure happened recently, perhaps the last 100 lines might give a clue.
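
Something along these lines should do it (journalctl -k reads the kernel log too, in case the dmesg buffer has wrapped):

$ dmesg -T | tail -n 100
$ journalctl -k --since '1 hour ago' | egrep -i 'oom|killed process'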

This topic was automatically closed after 2 days. New replies are no longer allowed.