I just tried to update and there is a failure near the end:
I, [2024-07-04T07:17:18.714988 #807] INFO -- : Writing /var/www/discourse/public/assets/scripts/discourse-test-listen-boot-9b14a0fc65c689577e6a428dcfd680205516fe211700a71c7adb5cbcf4df2cc5.js
rake aborted!
Zlib::BufError: buffer error (Zlib::BufError)
...
< more stuff >
...
FAILED
--------------------
Pups::ExecError: cd /var/www/discourse && su discourse -c 'SKIP_EMBER_CLI_COMPILE=1 bundle exec rake themes:update assets:precompile' failed with return #<Process::Status: pid 805 exit 1>
Location of failure: /usr/local/lib/ruby/gems/3.3.0/gems/pups-1.2.1/lib/pups/exec_command.rb:132:in `spawn'
exec failed with the params {"cd"=>"$home", "tag"=>"precompile", "hook"=>"assets_precompile", "cmd"=>["su discourse -c 'SKIP_EMBER_CLI_COMPILE=1 bundle exec rake themes:update assets:precompile'"]}
bootstrap failed with exit code 1
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one.
But I already have the MaxMind account ID in the yaml file.
I guess I could enter the container and re-run that command, but I wonder more generally: is there a way to recover from a failed rebuild, or do people just run the whole thing again?
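For anyone landing here with the same question, these are the two recovery paths I know of. The container name `app` and the install path `/var/discourse` are the defaults from a standard install (adjust if yours differ), and the rake invocation is the one from the log above:

```
# Option 1: simply run the whole rebuild again (what most replies below ended up doing)
cd /var/discourse
./launcher rebuild app

# Option 2: if the old container is still running, finish the failed step by hand
./launcher enter app
cd /var/www/discourse
su discourse -c 'SKIP_EMBER_CLI_COMPILE=1 bundle exec rake themes:update assets:precompile'
exit
```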
Yesterday I gave up and commented out MaxMind. I then went into the container, added the values to discourse.conf, and it pulled the database successfully (a rather convoluted workaround).
I don’t understand how this could be happening, but it does look like a bug.
I think the only solution for now is to do without MaxMind.
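For anyone who wants to try the same workaround: discourse.conf lives at /var/www/discourse/config/discourse.conf inside the container and is plain `key = value` lines. The MaxMind setting names below are my assumption based on the DISCOURSE_MAXMIND_* env vars that go in app.yml, so verify them against your container definition before relying on this sketch:

```
# Inside the running container (./launcher enter app).
# Setting names are an assumption derived from the DISCOURSE_MAXMIND_* env vars; verify first.
cat >> /var/www/discourse/config/discourse.conf <<'EOF'
maxmind_account_id = 123456
maxmind_license_key = your_license_key_here
EOF
# Then re-run the failed precompile command from the log above, which is the step
# that was pulling the database for the original poster.
```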
I had this issue today as well, but only once: I just issued the rebuild again and it went through. It is so weird that there is no consistent way to reproduce this; other containers with a similar configuration built just fine.
I’ve rebuilt 3 sites today, and each one tripped up as above at the same point, which seemed to be just after updating the Theme Components.
Each time a second rebuild ran just fine, with me changing nothing at all. I do wonder if MaxMind is a bit of a red herring (or simply a different issue).
I use a two-container setup, so a failed rebuild is no big deal.
I had the same experience; it seems to fail on the first run. Luckily I had switched to a two-container setup, so the site remains live while I run the rebuild twice.
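For anyone not already on it, this is roughly what the two-container flow looks like (the container names data / web_only come from the sample templates; yours may differ). The site keeps serving from the old web container while the new image bootstraps, so a failed first attempt costs nothing but time:

```
cd /var/discourse
./launcher bootstrap web_only                               # build the new image; the live container keeps serving
./launcher destroy web_only && ./launcher start web_only    # brief downtime only for the swap
```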