Error during upgrade from Beta 3.1.x to latest

I do an upgrade approx. every 2 to 3 days, and the last update succeeded.

Today’s update via the /admin/upgrade URL failed first.

Then I did the usual:

cd /var/discourse
sudo git pull
sudo ./launcher rebuild app

This also resulted in an error:

/var/www/discourse/vendor/bundle/ruby/3.1.0/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
/usr/local/bin/bundle:25:in `load'
/usr/local/bin/bundle:25:in `<main>'
Tasks: TOP => db:migrate
(See full trace by running task with --trace)
I, [2023-01-20T15:27:00.834259 #1]  INFO -- : == 20230118020114 AddPublicTopicCountToTags: migrating ========================
-- add_column(:tags, :public_topic_count, :integer, {:default=>0, :null=>false})
   -> 0.0138s
-- execute("UPDATE tags t\nSET public_topic_count = x.topic_count\nFROM (\n  SELECT\n    COUNT(topics.id) AS topic_count,\n    tags.id AS tag_id\n  FROM tags\n  INNER JOIN topic_tags ON tags.id = topic_tags.tag_id\n  INNER JOIN topics ON topics.id = topic_tags.topic_id AND topics.deleted_at IS NULL AND topics.archetype != 'private_message'\n  INNER JOIN categories ON categories.id = topics.category_id AND NOT categories.read_restricted\n  GROUP BY tags.id\n) x\nWHERE x.tag_id = t.id\nAND x.topic_count <> t.public_topic_count;\n")

I, [2023-01-20T15:27:00.834823 #1]  INFO -- : Terminating async processes
I, [2023-01-20T15:27:00.834868 #1]  INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main pid: 42
I, [2023-01-20T15:27:00.835173 #1]  INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 103
2023-01-20 15:27:00.835 UTC [42] LOG:  received fast shutdown request
103:signal-handler (1674228420) Received SIGTERM scheduling shutdown...
2023-01-20 15:27:00.849 UTC [42] LOG:  aborting any active transactions
2023-01-20 15:27:00.851 UTC [42] LOG:  background worker "logical replication launcher" (PID 51) exited with exit code 1
2023-01-20 15:27:00.855 UTC [46] LOG:  shutting down
2023-01-20 15:27:00.901 UTC [42] LOG:  database system is shut down
103:M 20 Jan 2023 15:27:00.905 # User requested shutdown...
103:M 20 Jan 2023 15:27:00.905 * Saving the final RDB snapshot before exiting.
103:M 20 Jan 2023 15:27:00.950 * DB saved on disk
103:M 20 Jan 2023 15:27:00.950 # Redis is now ready to exit, bye bye...


FAILED
--------------------
Pups::ExecError: cd /var/www/discourse && su discourse -c 'bundle exec rake db:migrate' failed with return #<Process::Status: pid 390 exit 1>
Location of failure: /usr/local/lib/ruby/gems/3.1.0/gems/pups-1.1.1/lib/pups/exec_command.rb:117:in `spawn'
exec failed with the params {"cd"=>"$home", "hook"=>"db_migrate", "cmd"=>["su discourse -c 'bundle exec rake db:migrate'"]}
bootstrap failed with exit code 1
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.
37cbe5dc5c82e0a41e9e1deea5b99f5e643bfe6bcd53d52c38e3e855a85ed81e

Scrolling further up, I see this entry:

PG::UniqueViolation: ERROR:  duplicate key value violates unique constraint "index_tags_on_name"
DETAIL:  Key (name)=(net) already exists.
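
The unique index is on the tags table’s name column, so a second tag must be clashing with "net", possibly one that differs only in letter case. A query along these lines (just a sketch, not the statement the migration actually runs) should surface such collisions:

-- Sketch: list tag names that collide once lowercased
SELECT LOWER(name) AS lowered_name,
       STRING_AGG(name, ', ' ORDER BY name) AS variants,
       COUNT(*) AS duplicates
FROM tags
GROUP BY LOWER(name)
HAVING COUNT(*) > 1;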

I’m really clueless about how to fix/resolve this.

My forum is now offline. I’ve run sudo ./launcher rebuild app again. It failed, too.

Could someone please help me with some suggestions on how to resolve?

Please run ./launcher start app to reboot the old image. That may bring your forum back online.


Thanks a lot.

The container is now running again, but the error persists.

Most likely I have the same issue as here:

I’ll now try to resolve it the same way as described there.

Update 1 - Solution

OK, it seems I was able to resolve it with these steps:

  1. Edited the app.yml and pinned a Discourse version there.
  2. Rebuilt the image.
  3. Activated the Data Explorer plugin and used a query to identify the duplicate tags (see the sketch after this list).
  4. Used the regular admin GUI to look at those tags, then renamed one tag so it was unique again.
  5. Removed the pinned version again.
  6. Rebuilt Discourse with sudo ./launcher rebuild app. This time it succeeded.
  7. After starting Discourse, I went to the tags admin area and cleaned up the duplicates (deleted one of the formerly duplicate tags after assigning the other tag to its topics).
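
A Data Explorer query along these lines can identify such duplicates (a sketch, not necessarily the exact statement; it assumes the clashing tags differ only in letter case and counts topics via topic_tags to help decide which variant to keep):

-- Sketch: duplicate tag names (case-insensitive) with their topic counts
SELECT t.id,
       t.name,
       COUNT(tt.topic_id) AS topic_count
FROM tags t
LEFT JOIN topic_tags tt ON tt.tag_id = t.id
WHERE LOWER(t.name) IN (
  SELECT LOWER(name)
  FROM tags
  GROUP BY LOWER(name)
  HAVING COUNT(*) > 1
)
GROUP BY t.id, t.name
ORDER BY LOWER(t.name), topic_count DESC;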

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.