2.7.0.beta2 upgrade failed

Hi, I just received an email from Discourse proposing to upgrade from 2.7.0.beta1 to 2.7.0.beta2.
I went to the admin page and clicked on upgrade, and it seems that something has failed.
As a result, it is no longer possible to access my Discourse instance (Error 500).

I include here the final part of the log. The full log was too long to post here, so you can access it through pastebin instead.

The instance is hosted on a DigitalOcean droplet with 2GB of memory and a 50GB hard disk. Nothing special was configured on this Discourse instance; in fact, nobody has ever even posted a single message on it.

brotli -f --quality=6 /var/www/discourse/public/assets/locales/zh_TW-09cd38bbba5770af30c208be36b0763fb1db74e336db84185e8a173201f7548e.js --output=/var/www/discourse/public/assets/locales/zh_TW-09cd38bbba5770af30c208be36b0763fb1db74e336db84185e8a173201f7548e.js.br


Done compressing locales/zh_TW-09cd38bbba5770af30c208be36b0763fb1db74e336db84185e8a173201f7548e.js : 0.11 secs

5746.556775825 Compressing: locales/sv-dfd441e5e9497b2361a61cc46e5f3491508ad05a9d84ef0a2c17dac10890fd24.js
gzip -f -c -9 /var/www/discourse/public/assets/locales/sv-dfd441e5e9497b2361a61cc46e5f3491508ad05a9d84ef0a2c17dac10890fd24.js > /var/www/discourse/public/assets/locales/sv-dfd441e5e9497b2361a61cc46e5f3491508ad05a9d84ef0a2c17dac10890fd24.js.gz

brotli -f --quality=6 /var/www/discourse/public/assets/locales/sv-dfd441e5e9497b2361a61cc46e5f3491508ad05a9d84ef0a2c17dac10890fd24.js --output=/var/www/discourse/public/assets/locales/sv-dfd441e5e9497b2361a61cc46e5f3491508ad05a9d84ef0a2c17dac10890fd24.js.br


Done compressing locales/sv-dfd441e5e9497b2361a61cc46e5f3491508ad05a9d84ef0a2c17dac10890fd24.js : 0.11 secs

5746.662857966 Compressing: locales/sl-97d2fc2eec6a4603afbd6466d84b4281605561c943f4b70c52d8b6874a54acef.js
gzip -f -c -9 /var/www/discourse/public/assets/locales/sl-97d2fc2eec6a4603afbd6466d84b4281605561c943f4b70c52d8b6874a54acef.js > /var/www/discourse/public/assets/locales/sl-97d2fc2eec6a4603afbd6466d84b4281605561c943f4b70c52d8b6874a54acef.js.gz

brotli -f --quality=6 /var/www/discourse/public/assets/locales/sl-97d2fc2eec6a4603afbd6466d84b4281605561c943f4b70c52d8b6874a54acef.js --output=/var/www/discourse/public/assets/locales/sl-97d2fc2eec6a4603afbd6466d84b4281605561c943f4b70c52d8b6874a54acef.js.br


Done compressing locales/sl-97d2fc2eec6a4603afbd6466d84b4281605561c943f4b70c52d8b6874a54acef.js : 0.1 secs

5746.764039922 Compressing: locales/hy-afe58e4f81b01be42710b51b1eb32d913a9a77fb35efd5f197144d7113693a04.js
gzip -f -c -9 /var/www/discourse/public/assets/locales/hy-afe58e4f81b01be42710b51b1eb32d913a9a77fb35efd5f197144d7113693a04.js > /var/www/discourse/public/assets/locales/hy-afe58e4f81b01be42710b51b1eb32d913a9a77fb35efd5f197144d7113693a04.js.gz

brotli -f --quality=6 /var/www/discourse/public/assets/locales/hy-afe58e4f81b01be42710b51b1eb32d913a9a77fb35efd5f197144d7113693a04.js --output=/var/www/discourse/public/assets/locales/hy-afe58e4f81b01be42710b51b1eb32d913a9a77fb35efd5f197144d7113693a04.js.br


Done compressing locales/hy-afe58e4f81b01be42710b51b1eb32d913a9a77fb35efd5f197144d7113693a04.js : 0.14 secs

5746.902258561 Compressing: locales/da-2c6e181ef146930e8baa63c4ffe80df59414b8de019ee19058aa4ee1dcd88280.js
gzip -f -c -9 /var/www/discourse/public/assets/locales/da-2c6e181ef146930e8baa63c4ffe80df59414b8de019ee19058aa4ee1dcd88280.js > /var/www/discourse/public/assets/locales/da-2c6e181ef146930e8baa63c4ffe80df59414b8de019ee19058aa4ee1dcd88280.js.gz

brotli -f --quality=6 /var/www/discourse/public/assets/locales/da-2c6e181ef146930e8baa63c4ffe80df59414b8de019ee19058aa4ee1dcd88280.js --output=/var/www/discourse/public/assets/locales/da-2c6e181ef146930e8baa63c4ffe80df59414b8de019ee19058aa4ee1dcd88280.js.br


Done compressing locales/da-2c6e181ef146930e8baa63c4ffe80df59414b8de019ee19058aa4ee1dcd88280.js : 0.11 secs

5747.007671073 Compressing: locales/te-9740a00eaeb5b1140e0042391528339d963a5a043a8edae6ca33d4e939d50133.js
gzip -f -c -9 /var/www/discourse/public/assets/locales/te-9740a00eaeb5b1140e0042391528339d963a5a043a8edae6ca33d4e939d50133.js > /var/www/discourse/public/assets/locales/te-9740a00eaeb5b1140e0042391528339d963a5a043a8edae6ca33d4e939d50133.js.gz

brotli -f --quality=6 /var/www/discourse/public/assets/locales/te-9740a00eaeb5b1140e0042391528339d963a5a043a8edae6ca33d4e939d50133.js --output=/var/www/discourse/public/assets/locales/te-9740a00eaeb5b1140e0042391528339d963a5a043a8edae6ca33d4e939d50133.js.br


Done compressing locales/te-9740a00eaeb5b1140e0042391528339d963a5a043a8edae6ca33d4e939d50133.js : 0.15 secs

5747.159975235 Compressing: locales/ko-0c03d0523d94c0739085171c60657b22b01c8eb0b2b2bb690e7fb3422b756e17.js
gzip -f -c -9 /var/www/discourse/public/assets/locales/ko-0c03d0523d94c0739085171c60657b22b01c8eb0b2b2bb690e7fb3422b756e17.js > /var/www/discourse/public/assets/locales/ko-0c03d0523d94c0739085171c60657b22b01c8eb0b2b2bb690e7fb3422b756e17.js.gz

brotli -f --quality=6 /var/www/discourse/public/assets/locales/ko-0c03d0523d94c0739085171c60657b22b01c8eb0b2b2bb690e7fb3422b756e17.js --output=/var/www/discourse/public/assets/locales/ko-0c03d0523d94c0739085171c60657b22b01c8eb0b2b2bb690e7fb3422b756e17.js.br


Done compressing locales/ko-0c03d0523d94c0739085171c60657b22b01c8eb0b2b2bb690e7fb3422b756e17.js : 0.12 secs

Skipping: plugins/discourse-local-dates-85c0a52c5a0ee4c69ce0a55fb5c6047c7fd2c12f0437b843240bb9ea3d4457b1.js already compressed
Skipping: plugins/discourse-narrative-bot-d88c63e1a6fadc2e6371b706e54750b554e3ee890061223c9af0f8feeb89915a.js already compressed
Skipping: plugins/discourse-presence-da4864123e624ace0b06153a5b9e6b600e5d5b6d6c28ada211bb7ec50894a66c.js already compressed
Skipping: plugins/poll-91a566fa78da0bffec70d7c8923ac79757032168b646e8c84d921d9810789bb1.js already compressed
Skipping: application-bd6ed652347208302845f7e2be3f2d2dbbbb72be7df2c0e46c18422a61188ff0.js already compressed
5747.281046973 Compressing: vendor-b631d4ab0775fdbe453aa2158e18dc41826d0ba619e5f2731e5b9fa4c458af99.js
uglifyjs '/var/www/discourse/public/assets/_vendor-b631d4ab0775fdbe453aa2158e18dc41826d0ba619e5f2731e5b9fa4c458af99.js' -m -c -o '/var/www/discourse/public/assets/vendor-b631d4ab0775fdbe453aa2158e18dc41826d0ba619e5f2731e5b9fa4c458af99.js' --source-map "base='/var/www/discourse/public/assets',root='/assets',url='/assets/vendor-b631d4ab0775fdbe453aa2158e18dc41826d0ba619e5f2731e5b9fa4c458af99.js.map'"
Parse error at _vendor-b631d4ab0775fdbe453aa2158e18dc41826d0ba619e5f2731e5b9fa4c458af99.js:1850,34
        return Handlebars.compile(...arguments);
                                  ^
ERROR: Unexpected token: punc «.»
    at JS_Parse_Error.get (eval at <anonymous> (/usr/lib/node_modules/uglify-js/tools/node.js:18:1), <anonymous>:71:23)
    at fatal (/usr/lib/node_modules/uglify-js/bin/uglifyjs:332:27)
    at run (/usr/lib/node_modules/uglify-js/bin/uglifyjs:275:9)
    at Object.<anonymous> (/usr/lib/node_modules/uglify-js/bin/uglifyjs:190:5)
    at Module._compile (internal/modules/cjs/loader.js:778:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
    at Function.Module._load (internal/modules/cjs/loader.js:585:3)
    at Function.Module.runMain (internal/modules/cjs/loader.js:831:12)
rake aborted!
Errno::ENOENT: No such file or directory @ rb_file_s_size - /var/www/discourse/public/assets/vendor-b631d4ab0775fdbe453aa2158e18dc41826d0ba619e5f2731e5b9fa4c458af99.js
/var/www/discourse/lib/tasks/assets.rake:287:in `size'
/var/www/discourse/lib/tasks/assets.rake:287:in `block (4 levels) in <main>'
/var/www/discourse/lib/tasks/assets.rake:178:in `block in concurrent?'
/var/www/discourse/lib/tasks/assets.rake:278:in `block (3 levels) in <main>'
/var/www/discourse/lib/tasks/assets.rake:269:in `each'
/var/www/discourse/lib/tasks/assets.rake:269:in `block (2 levels) in <main>'
/var/www/discourse/lib/tasks/assets.rake:178:in `concurrent?'
/var/www/discourse/lib/tasks/assets.rake:266:in `block in <main>'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/rake-13.0.3/exe/rake:27:in `<top (required)>'
/usr/local/bin/bundle:23:in `load'
/usr/local/bin/bundle:23:in `<main>'
Tasks: TOP => assets:precompile
(See full trace by running task with --trace)
Docker Manager: FAILED TO UPGRADE
#<RuntimeError: RuntimeError>
/var/www/discourse/plugins/docker_manager/lib/docker_manager/upgrader.rb:178:in `run'
/var/www/discourse/plugins/docker_manager/lib/docker_manager/upgrader.rb:86:in `upgrade'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:19:in `block in <main>'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:6:in `fork'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:6:in `<main>'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.5.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:59:in `load'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.5.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:59:in `load'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/railties-6.0.3.3/lib/rails/commands/runner/runner_command.rb:42:in `perform'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/railties-6.0.3.3/lib/rails/command/base.rb:69:in `perform'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/railties-6.0.3.3/lib/rails/command.rb:46:in `invoke'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/railties-6.0.3.3/lib/rails/commands.rb:18:in `<main>'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.5.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.5.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.5.1/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.5.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.5.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/activesupport-6.0.3.3/lib/active_support/dependencies.rb:324:in `block in require'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/activesupport-6.0.3.3/lib/active_support/dependencies.rb:291:in `load_dependency'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/activesupport-6.0.3.3/lib/active_support/dependencies.rb:324:in `require'
bin/rails:17:in `<main>'
Spinning up 3 Unicorn worker(s) that were stopped initially

Do you have any non standard plugins installed?

Did you do a

./launcher rebuild app

at the command line?


On my side, I don't have any non-standard plugins, and I launched this from the command line. The first time, I was still getting the error.

I then restarted my server, rebuilt the app… and it works.
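
In concrete terms, that was roughly the following (a sketch of the steps just described; paths assume a standard install):

sudo reboot
# once the droplet is back up:
cd /var/discourse
./launcher rebuild app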

Hi, thanks for the quick reply!
I have a vanilla installation and have never changed anything except for clicking on the "upgrade" button from time to time. This is why what happened to my Discourse instance was very surprising.
I didn't try a ./launcher rebuild app; I will try it and report the result.

My upgrade to 2.7.0.beta2 cannot even be started - clicking on the one-click upgrade link in the email announcing this upgrade

Hooray, a new version of Discourse is available!

Your version: 2.7.0.beta1
New version: 2.7.0.beta2

Upgrade using our easy one-click browser upgrade

See what's new in the release notes or view the raw GitHub changelog

Visit meta.discourse.org for news, discussion, and support for Discourse

results in the page shown in my screenshot, with the upgrade buttons disabled.

Why is this happening? Am I supposed to run the upgrade from the console, logged in to the DigitalOcean droplet hosting Discourse?

You'll need to upgrade from the command line. There is a new Docker base image.

./launcher rebuild app

The announcement says that, but it's not clear from the standard upgrade email.


I've just upgraded to 2.7.0.beta2.
You have to do this particular upgrade from the command line.

cd /var/discourse
git pull
./launcher rebuild app

My apologies for this asynchronous discussion - I started my question and had to suddenly leave my office. To prevent losing my unfinished message, I saved it - and it gave you the opportunity to answer my question before I fully posted it :wink:

The DO console would work if you don't use a terminal.

The buttons in the screenshot are initially disabled, forcing you to upgrade Docker first.

An aside: you could have simply left your reply, as it is saved automatically as a draft.

Spoke too soon

Thanks, @geoff777 - it all makes sense now

Trying to run

cd /var/discourse
git pull
./launcher rebuild app

The rebuild app script failed twice in a row, at exactly the same place:

This cannot be random networking interference. Please advise.


In my case, the rebuild got stuck at the discourse-vk-auth plugin. My instance currently works without this plugin.

gem install rrule -v 0.4.2 -i /var/www/discourse/plugins/discourse-calendar/gems/2.7.2 --no-document --ignore-dependencies --no-user-install
Successfully installed rrule-0.4.2
1 gem installed
gem install omniauth-vkontakte -v 1.6.1 -i /var/www/discourse/plugins/discourse-vk-auth/gems/2.7.2 --no-document --ignore-dependencies --no-user-install
Successfully installed omniauth-vkontakte-1.6.1
1 gem installed

I, [2021-01-22T17:13:51.391038 #1]  INFO -- : > cd /var/www/discourse && su discourse -c 'bundle exec rake db:migrate'
rake aborted!
Gem::ConflictError: Unable to activate omniauth-vkontakte-1.6.1, because omniauth-oauth2-1.7.1 conflicts with omniauth-oauth2 (>= 1.5, <= 1.7.0)

I've also run into trouble with that vk plugin in previous releases. My recommendation is to remove it, plus paste the error log to that plugin's topic on the forum so it can be addressed. :+1:
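
For anyone unsure how removal works: delete (or comment out) the plugin's clone line in containers/app.yml and rebuild. A minimal sketch - the exact clone URL is whatever line you originally added under the plugins hook in your app.yml:

cd /var/discourse
nano containers/app.yml
# under hooks > after_code > plugins, remove or comment out the line, e.g.:
#    - git clone https://github.com/discourse/discourse-vk-auth.git
./launcher rebuild app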


Hard to say from the screenshot. Do you have enough disk space?
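
For reference, a quick way to check both disk and memory from the droplet, using standard Linux tools:

df -h /    # free disk space on the root filesystem
free -m    # memory and swap usage, in MB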

You have edited both of your previous posts after I interacted with them.

The second time, I 'liked' a positive post, but you edited it and now it's a post about a problem.
Which, apparently, I like?

My apologies, @geoff777 - without much thinking, I reclassified your "like" comment (my intent was to keep the number of my posts small, thus lowering the chance of miscommunication).


I believe that I have enough space:

 System information as of Fri Jan 22 20:56:56 UTC 2021

  System load:  0.02               Users logged in:          0
  Usage of /:   39.7% of 24.06GB   IPv4 address for docker0: 172.17.0.1
  Memory usage: 50%                IPv4 address for eth0:    xxx.xxx.xxx.xxx
  Swap usage:   1%                 IPv4 address for eth0:    
  Processes:    107                IPv4 address for eth1:    

However, I suspect that my problem has to do with the DigitalOcean console - it times out very quickly, so it is possible that my update succeeded and I am just not aware of it. I will contact DO support and report my findings back here.

Thanks.

You can check on your forum's dashboard whether you have successfully upgraded.
I hope it worked.
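
If the dashboard itself is unreachable, the running version can also be read from the generator meta tag that Discourse serves on every page - a sketch, using the hostname from this thread:

curl -s https://forum.congral.tech | grep generator
# prints a line like: content="Discourse 2.7.0.beta1 - https://github.com/discourse/discourse version ..."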

Your wish/hope, @geoff777, did not help. I tried to log in - and the Discourse server was not responding.

I decided to run discourse-doctor on the droplet, connecting with the PuTTY tool (I am running on a Windows 10 machine) - and my console stopped at the same place.

Note the beginning of this run: app not running!

root@discourse-server:/var/discourse# ./discourse-doctor
DISCOURSE DOCTOR Fri Jan 22 22:14:45 UTC 2021
OS: Linux discourse-server 5.4.0-62-generic #70-Ubuntu SMP Tue Jan 12 12:45:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux


Found containers/app.yml

==================== YML SETTINGS ====================
DISCOURSE_HOSTNAME=forum.congral.tech
SMTP_ADDRESS=smtp.mailgun.org
DEVELOPER_EMAILS=admin@congral.com
SMTP_PASSWORD=3a22be2a4ba5ce9b0865199dc7083871-xxxxxx
SMTP_PORT=587
SMTP_USER_NAME=postmaster@forum.congral.tech
LETSENCRYPT_ACCOUNT_EMAIL=nikolaj.ivancic@congral.com

==================== DOCKER INFO ====================
DOCKER VERSION: Docker version 20.10.2, build 2291f61

DOCKER PROCESSES (docker ps -a)

CONTAINER ID   IMAGE                              COMMAND                  CREATED          STATUS                      PORTS     NAMES
4e0150995f6a   discourse/base:2.0.20201221-2020   "/bin/bash -c 'cd /p…"   16 minutes ago   Exited (1) 14 minutes ago             mystifying_fermat
271aff6b3bce   discourse/base:2.0.20201221-2020   "/bin/bash -c 'cd /p…"   5 hours ago      Exited (1) 5 hours ago                modest_brown
30ed32bab133   discourse/base:2.0.20201221-2020   "/bin/bash -c 'cd /p…"   5 hours ago      Exited (1) 5 hours ago                laughing_lalande
add2d921333a   local_discourse/app                "/sbin/boot"             2 weeks ago      Exited (5) 5 hours ago                app

==================== SERIOUS PROBLEM!!!! ====================
app not running!
Attempting to rebuild
==================== REBUILD LOG ====================
Ensuring launcher is up to date
Fetching origin
Launcher is up-to-date
Stopping old container
+ /usr/bin/docker stop -t 60 app
...

Here is the complete log up to the point of failure, saved in my GitHub repository to save space here.

I repeated this upgrade several times, and each attempt failed (reported to me as a "console network failure"). It is obvious that this upgrade kills the existing Discourse instance.

Please advise. I am happy to pass you my certs in case you want to run this yourselves.

Both times you got that PuTTY Fatal Error?

That seems like a PuTTY problem, though I can't imagine why.
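
If the connection itself is dropping mid-rebuild, one workaround is to run the rebuild inside a screen (or tmux) session so it survives a lost SSH connection - a sketch, assuming screen is available on the droplet:

screen -S rebuild                              # start a named session
cd /var/discourse && ./launcher rebuild app    # run the rebuild inside it
# if the connection drops, log back in and reattach with:
screen -r rebuild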

Yes, @pfaffman, I do get that same error. If the execution of ./discourse-doctor causes some catastrophic failure, is it not possible that this failure results in the PuTTY fatal error - at least from my own (remote) view of the failure?

It does not seem very likely, but I will create a support ticket with DO, hoping that they might have a better view of this problem.

I guess I would try the DigitalOcean console next (actually, I would use a terminal in Ubuntu, but that's not what I'd recommend to you).
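
In the meantime, since your docker ps -a output shows the old app container still present (just exited), it may be possible to bring the pre-upgrade site back while you investigate - a sketch, assuming the old local_discourse/app image was not removed:

cd /var/discourse
./launcher start app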

Since yesterday, I have observed a lot of weird behavior. Before sharing it here, please let me know whether continuing this thread is useful to anyone (the alternative is that I am just stepping on my own d… and this is all a waste of everyone's time). I found that:

  • Resetting the root password (from the DO control panel) restored the ability to use DigitalOcean's console (as @pfaffman suggested above).
  • Next, I ran discourse-doctor in that console and it found nothing wrong. (Before that reset, https://forum.congral.tech failed to respond - now everything works just fine.)
  • All of my attempts to upgrade Discourse (like this one) failed multiple times (the PuTTY console showing Network Error as the reason), and today I can verify that the upgrade failed:
content="Discourse 2.7.0.beta1 - https://github.com/discourse/discourse version 1cf92310456fb6e6424f6b532770461c56378d53"

Changing the root password and then using DigitalOcean's console is a significant change that might be of interest to the Discourse team to understand better. Should I continue digging and sharing my findings here, @pfaffman?