Upgrade failure with PG::InternalError

I got a blank page when following the upgrade link, so I tried upgrading manually with:

  cd /var/discourse
  git pull
  ./launcher rebuild app

This failed with the following error message:

RuntimeError: cd /var/www/discourse && su discourse -c 'bundle exec rake db:migrate' failed with return #<Process::Status: pid 292 exit 1>
Location of failure: /pups/lib/pups/exec_command.rb:105:in `spawn'
exec failed with the params {"cd"=>"$home", "hook"=>"bundle_exec", "cmd"=>["su discourse -c 'bundle install --deployment --verbose --without test --without development'", "su discourse -c 'bundle exec rake db:migrate'", "su discourse -c 'bundle exec rake assets:precompile'"]}

There is also this message higher up:

I, [2015-05-29T18:57:39.834094 #38]  INFO -- : > cd /var/www/discourse && su discourse -c 'bundle exec rake db:migrate'
2015-05-29 18:57:52 UTC [302-1] discourse@discourse ERROR:  catalog is missing 2 attribute(s) for relid 16563
2015-05-29 18:57:52 UTC [302-2] discourse@discourse STATEMENT:  CREATE  INDEX  "index_topics_on_pinned_globally" ON "topics"  ("pinned_globally") WHERE pinned_globally
rake aborted!
StandardError: An error has occurred, this and all later migrations canceled:

PG::InternalError: ERROR:  catalog is missing 2 attribute(s) for relid 16563

Could somebody help me with this please? Thanks!

Hmm, I kind of wonder if that’s a database integrity problem.

This is our standard Docker-based install, yes?

Yes, I followed the 30-minute installation from your blog.

Hmm, I just updated discourse.codinghorror.com and talk.commonmark.org using the web updater (they are also Docker-based installs on Digital Ocean) and had no issues.

  • Is this on Digital Ocean or somewhere else?

  • Do you know roughly which version you upgraded from? Was it recent or very old?

Yes, it is on Digital Ocean, and the previous version was 1.2.0.beta8

If you can share credentials privately with @techapj from our team, he can assist. Can you private message him the information necessary?

Yes, okay, I will do that. Thanks for your help!

I looked into this issue and it was due to a corrupted Postgres database.
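For anyone hitting the same error: "catalog is missing N attribute(s) for relid X" means the column count Postgres expects for a table (pg_class.relnatts) no longer matches the rows actually present in pg_attribute. A rough way to confirm this from inside the app container is sketched below; the exact psql invocation may differ on your install, and 16563 is just the relid from the error message above:

```shell
# Enter the running container (standard Docker-based install)
cd /var/discourse
./launcher enter app

# Compare the attribute count Postgres expects (relnatts) with the
# attributes actually recorded in the catalog for that relid.
su postgres -c "psql discourse -c \"
  SELECT c.relname,
         c.relnatts AS expected_attrs,
         count(a.attnum) AS actual_attrs
  FROM pg_class c
  LEFT JOIN pg_attribute a
    ON a.attrelid = c.oid AND a.attnum > 0 AND NOT a.attisdropped
  WHERE c.oid = 16563
  GROUP BY c.relname, c.relnatts;\""
```

If the two numbers disagree, the catalog itself is damaged and the usual fix is restoring from a known-good backup rather than trying to repair the catalog by hand.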

I would recommend that everyone enable the “backup daily” site setting.
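The setting lives under Admin → Settings → Backups. If you prefer the command line, it can also be flipped from a Rails console inside the container; the setting name below matches this era of Discourse, but verify it in your admin panel before relying on it:

```shell
# Open a Rails console inside the running app container
cd /var/discourse
./launcher enter app
rails c

# Then, at the Rails console prompt (setting name assumed for this version):
# SiteSetting.backup_daily = true
```

With daily backups enabled, a corrupted database like the one in this topic can be recovered by restoring the most recent backup instead of losing data.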