Error on building latest version

Hi. I get the following error when running ./launcher rebuild web_only on the latest release of Discourse.

fs.js:114
    throw err;
    ^

Error: ENOENT: no such file or directory, open 'root='/assets',url='/assets/vendor-4681e47c140b5a5bea2bfb1fec89365858288a8ea0c21979c0167ad9b570ee3d.js.map''
    at Object.openSync (fs.js:438:3)
    at Object.writeFileSync (fs.js:1189:35)
    at done (/usr/lib/node_modules/uglify-js/bin/uglifyjs:516:20)
    at cb (/usr/lib/node_modules/uglify-js/bin/uglifyjs:324:39)
    at /usr/lib/node_modules/uglify-js/bin/uglifyjs:391:9
    at FSReqWrap.readFileAfterClose [as oncomplete] (internal/fs/read_file_context.js:53:3)
rake aborted!
Errno::ENOENT: No such file or directory @ rb_file_s_size - /var/www/discourse/public/assets/vendor-4681e47c140b5a5bea2bfb1fec89365858288a8ea0c21979c0167ad9b570ee3d.js
/var/www/discourse/lib/tasks/assets.rake:268:in `size'
/var/www/discourse/lib/tasks/assets.rake:268:in `block (4 levels) in <top (required)>'
/var/www/discourse/lib/tasks/assets.rake:159:in `block in concurrent?'
/var/www/discourse/lib/tasks/assets.rake:259:in `block (3 levels) in <top (required)>'
/var/www/discourse/lib/tasks/assets.rake:250:in `each'
/var/www/discourse/lib/tasks/assets.rake:250:in `block (2 levels) in <top (required)>'
/var/www/discourse/lib/tasks/assets.rake:159:in `concurrent?'
/var/www/discourse/lib/tasks/assets.rake:247:in `block in <top (required)>'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/rake-13.0.1/exe/rake:27:in `<top (required)>'
/usr/local/bin/bundle:23:in `load'
/usr/local/bin/bundle:23:in `<main>'
Tasks: TOP => assets:precompile
(See full trace by running task with --trace)
I, [2019-12-11T22:53:15.806396 #18]  INFO -- : Downloading MaxMindDB...
Compressing Javascript and Generating Source Maps



FAILED
--------------------
Pups::ExecError: cd /var/www/discourse && su discourse -c 'bundle exec rake assets:precompile' failed with return #<Process::Status: pid 17151 exit 1>
Location of failure: /pups/lib/pups/exec_command.rb:112:in `spawn'
exec failed with the params {"cd"=>"$home", "hook"=>"assets_precompile", "cmd"=>["su discourse -c 'bundle exec rake assets:precompile'"]}
f565d457b97d7ff12a258b03a456563a5a0e928c707c70e194ef88ba170aaf3a
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one

I have disabled all plugins, still no joy. Can anyone shed any light on this?

We saw this before somewhere else.

Can you try

./launcher cleanup
git pull
./launcher rebuild app

If that does not work, try removing all the containers on the machine and all images.
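If you do go that far, something along these lines will do it, but it is destructive and removes every container and image on the host, so only run it if nothing else on the machine uses Docker (commands from memory, double check before running):

docker ps -aq | xargs docker rm -f
docker images -q | xargs docker rmi -f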

Thanks for the quick response @sam. I will try that.

Quick question: I am assuming the “git pull” pulls the latest version of discourse-docker, right?

In general, is it necessary to upgrade discourse-docker and discourse at the same time?

Currently, I don’t have discourse-docker controlled by git. Instead, I download a specific revision of discourse-docker (as a Zip) when configuring the server and pin the version of discourse to a specific commit during deployment of discourse.
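For reference, the pinning part of my containers/web_only.yml looks roughly like this (the SHA below is just an illustration, not my real one):

params:
  version: 1a2b3c4d5e6f7890abcdef1234567890abcdef12   # pinned Discourse commit (illustrative SHA)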

The reason I do this is to try to make the build repeatable, i.e. running the same command with the same configuration + source at two different times should produce the same artifact. In general this is a good idea and has got me out of loads of ops nightmares with other software over the years :wink: It gives you the ability to roll back to a known good configuration.

However, I think I’m swimming against the tide here with Discourse, because it seems to want to pull the latest versions of various bits of software during the build. I’m starting to wonder whether, by trying to make the build repeatable, I’m actually shooting myself in the foot?

Very true words, you have entered a 100% unsupportable state with this hackery :flushed:

Recommend you pull latest from git asap

Old docker images of discourse base are incompatible with current discourse, plus they are missing many security patches

Thanks @sam. I’ve done that and I am able to build the new version now. Fortunately, this was all on beta so no harm done :slight_smile: I’m not sure it is “hackery” to want a repeatable build though :thinking:

The thing I am trying to get here is the ability to roll back to a known good version. Suppose that I run ./launcher rebuild app at time t1 and discourse works. I then run ./launcher rebuild app at time t2 and something is wrong. How do I get the software back to the previous version? I think I could live with the fact that the build is not repeatable if I could roll back to a known good state. Since the launcher has already built a working docker image at time t1, is it possible to tell the launcher to use a specific image rather than the bad one that was built at time t2?

Any ideas?

Sorry for going a bit off topic on this, I can re-post if you like.

If you want repeatable builds you have got to pin your discourse version to a specific SHA via your container config, and pin every plugin as well

That means you stop getting fixes to discourse, security fixes to the docker image and so on, but the build will be pretty repeatable

You may need to amend templates as well to fossilise stuff and never take in security fixes to apt dependencies
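Something like this in the container yml, with placeholder SHAs:

params:
  version: 0123456789abcdef0123456789abcdef01234567   # a specific Discourse commit instead of tests-passed

hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git
          - git -C docker_manager checkout fedcba9876543210fedcba9876543210fedcba98   # pin the plugin too (placeholder SHA)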

Ok, I already pin the version of discourse to a specific commit, and I can also do this for plugins. But if I don’t also pin discourse-docker, won’t this result in a situation where discourse-base gets updated on every build but discourse does not? Won’t that just cause a similar incompatibility, only the other way round (because discourse-base gets ahead of discourse)?

I am confused, how did this error show up if you have discourse pinned to something old?

If NGINX has a critical vulnerability do you want that fixed?

The error showed up when I moved the pinned version of discourse forward from 2.4.0.beta2 to 2.4.0.beta8.

Yes of course I would like to have critical vulnerabilities in dependent software systems fixed when I run a new build! Sounds Fantastic!

However, I would also like to be able to roll back in the case where the new version is broken :slight_smile:

Let me run a concrete example:

Suppose my configuration is currently in this state:

discourse: 2.4.0.beta2 (pinned in web_only.yml)

I run ./launcher rebuild web_only and everything works.

Now my system is in this state:

discourse: 2.4.0.beta2
discourse-docker: LATEST-AT-TIME-T1

So now I change my configuration to this state:

discourse: 2.4.0.beta8 (pinned in web_only.yml)

I run ./launcher rebuild web_only and something is broken.

My system is now in this state:

discourse: 2.4.0.beta8
discourse-docker: LATEST-AT-TIME-T2

So now I want to go back to the previous version to get everything working again. I change the pinned version of discourse back to 2.4.0.beta2 and rebuild. However, when I run ./launcher rebuild web_only, the system is now in this state:

discourse: 2.4.0.beta2
discourse-docker: LATEST-AT-TIME-T2

Although the pinned version of discourse is the same, the version of discourse-base (and the rest of discourse-docker: the templates, the ./launcher itself, etc.) will now be different, so I won’t be rolling back to a known good state, and I would worry that it might not build at all.

Sorry if I’m being dense here; all I want is the ability to get back to safety in case there are problems during an update. Perhaps re-running ./launcher rebuild web_only is just not the right way to roll back here? For other systems that I deploy, I would just re-deploy a previous Docker image. Is there a way to tell the launcher to do this?
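For example, I imagine something like this could work, assuming the launcher builds an image called local_discourse/web_only (I have not verified that):

# before the risky rebuild, keep a copy of the known good image
docker tag local_discourse/web_only local_discourse/web_only:known-good

# if the new build turns out to be broken, put the old image back and start from it
docker tag local_discourse/web_only:known-good local_discourse/web_only
./launcher destroy web_only
./launcher start web_only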

Yeah, you have got to rethink your process here. Our db migrations are not reversible.

If you want to test out an upgrade without committing you need to operate in a staging sandbox

Yeah, I understand that it is difficult to roll back a db migration. Discourse is obviously not built with my kind of release and deployment management strategy in mind. I should stop swimming against the tide here.

I have a staging environment, so I’ll just abandon any ideas of a repeatable build and deployment, test on staging, and then cross my fingers and toes when moving to production.

Thanks for your help @sam
