PostgreSQL 12 update

:warning: WARNING! If your database is very large, you will need a lot of extra disk space (2x database size) and should be very careful with this upgrade!
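
Before you start, it is worth checking how much room you have (a sketch, assuming the standard standalone layout under /var/discourse; data container installs typically keep their database under shared/data instead):

cd /var/discourse
du -sh shared/standalone/postgres_data   # current PostgreSQL data directory size
df -h .                                  # free space on the disk holding /var/discourse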

We just landed the long-awaited PostgreSQL major version upgrade. Any site admins rebuilding Discourse from the command line will be upgraded from the old PostgreSQL 10 to PostgreSQL 12.

We have been running this new version on Meta for a while, and everything is working fine. PostgreSQL 12 brings lots of improvements that Discourse will leverage automatically.

Updating

Official Install Guide (single container)

On your next rebuild, you will see this message at the end:

Upgrade Complete
----------------

To complete the upgrade, rebuild again using:

./launcher rebuild app
-------------------------------------------------------------------------------------

That means everything went well with the upgrade! You just need to issue a new rebuild to get your site back up and running.

Data Container Install

If you are running a setup with a dedicated data container based on the sample supplied in our discourse_docker repository, you want to be sure you shut down PostgreSQL in a safe and clean way.

Nowadays we have background jobs running queries that span several minutes, so shutting down the web container first helps the data container shut down safely. Note that the data container is rebuilt twice, just like the single container case: the first rebuild performs the PostgreSQL upgrade and the second completes it.

./launcher stop web_only
./launcher stop data
./launcher rebuild data
./launcher rebuild data
./launcher rebuild web_only

Before issuing the first rebuild of the data container, you can tail the PostgreSQL log to see if it was shut down properly.

Running tail -f shared/data/log/var-log/postgres/current should show the following output if the shutdown was clean:

2020-05-13 18:33:33.457 UTC [36] LOG:  received smart shutdown request
2020-05-13 18:33:33.464 UTC [36] LOG:  worker process: logical replication launcher (PID 52) exited with exit code 1
2020-05-13 18:33:33.465 UTC [47] LOG:  shutting down
2020-05-13 18:33:33.479 UTC [36] LOG:  database system is shut down

Doing a manual update / space-constrained environments

:warning::warning::warning:
YOU MUST BACK UP THE postgres_data DIRECTORY BEFORE TRYING THIS
:warning::warning::warning:
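
One way to take that backup, after stopping your containers as in the first command below, is a plain tarball (a sketch; the /backup destination is just a placeholder for any disk or host with enough room):

cd /var/discourse
tar -czf /backup/postgres_data-pg10.tar.gz shared/standalone/postgres_data   # /backup is a placeholder destination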

If you are in a space-constrained environment without any way to get more space, you can try the following:

./launcher stop app #(or both web_only and data if that is your case)
mkdir -p /var/discourse/shared/standalone/postgres_data_new
docker run --rm \
	-v /var/discourse/shared/standalone/postgres_data:/var/lib/postgresql/10/data \
	-v /var/discourse/shared/standalone/postgres_data_new:/var/lib/postgresql/12/data \
	tianon/postgres-upgrade:10-to-12
mv /var/discourse/shared/standalone/postgres_data /var/discourse/shared/standalone/postgres_data_old
mv /var/discourse/shared/standalone/postgres_data_new /var/discourse/shared/standalone/postgres_data
./launcher rebuild app #(or first data and then web_only if that is your case)
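
If you want to sanity-check the conversion before the two mv commands above, the new data directory should report major version 12:

cat /var/discourse/shared/standalone/postgres_data_new/PG_VERSION   # should print 12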

In my tests, this procedure requires less than 1x your current database size in free space.

Postponing the update

If you need to postpone the update during your next rebuild, you can swap the PostgreSQL template in your app.yml file by changing "templates/postgres.template.yml" to "templates/postgres.10.template.yml".
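
If you prefer to make that swap from the shell instead of editing the file by hand, something along these lines should work (a sketch, assuming the standard containers/app.yml path; review the file afterwards):

cd /var/discourse
sed -i 's|templates/postgres.template.yml|templates/postgres.10.template.yml|' containers/app.yml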

This is not recommended, as some site admins will forget to revert the change afterwards.

Post update optional tasks

Optimizing PostgreSQL statistics

After the update, the new PostgreSQL won’t have table statistics on hand. You can generate those using:

cd /var/discourse
./launcher enter app
su postgres
psql
\connect discourse
VACUUM VERBOSE ANALYZE;
\q
exit
exit
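
If you prefer a single non-interactive command from the host, something like this should be equivalent (a sketch, assuming the single container install with a container named app; data container installs would target the data container instead):

docker exec -it app su postgres -c 'psql -d discourse -c "VACUUM VERBOSE ANALYZE;"'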

Re-indexing your database

One of the highlights of PostgreSQL 12 is concurrent re-indexing, which means you can do this while your site is running. We were able to reduce the Meta database size by 20% doing this.

Just keep in mind that this operation uses quite a bit of resources, so I recommend doing it during off hours, while your traffic is low.

cd /var/discourse
./launcher enter app
su postgres
psql
\connect discourse
REINDEX SCHEMA CONCURRENTLY public;
\q
exit
exit
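
To see how much space the re-index reclaims, you can compare the database size before and after (a sketch, again assuming a container named app):

docker exec -it app su postgres -c 'psql -d discourse -c "SELECT pg_size_pretty(pg_database_size(current_database()));"'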

Cleaning up old data

You can delete the old data in PG10 format by removing the following folder:

cd /var/discourse
rm -rf shared/standalone/postgres_data_old

Also, a little extra piece of info: I do get a lot of “Database is in Recovery Mode” errors from time to time. I read a very old post about this and, comparing it to commits from that time, it does seem that some changes were made since then, so I don’t think the origin is the same.


I posted a list of commands to fix the broken upgrade here: Failed to rebuild (stable) - pgsql upgrade failed

However, I would like to fully revert to the previous stable 2.4.2, including docker image and everything. I can revert to pgsql 10 by changing to "templates/postgres.10.template.yml" and importing the sql dump backup that I did, but I would like a full revert. Can someone point out what I need to do? The images I have are:

# docker images
REPOSITORY                        TAG                 IMAGE ID            CREATED             SIZE
local_discourse/app               latest              02df1113b660        43 minutes ago      2.65GB
<none>                            <none>              4e6ce054c2d6        About an hour ago   2.65GB
<none>                            <none>              9febb602d32e        2 hours ago         2.67GB
discourse/base                    2.0.20200512-1735   991acdba0b1f        8 days ago          2.22GB
<none>                            <none>              661ce13e632d        2 months ago        2.46GB
discourse/base                    2.0.20200220-2221   175da912c6da        3 months ago        2.2GB
nabo.codimd.dev/hackmdio/hackmd   1.4.1               bd83dbd9ac9a        5 months ago        1.4GB
postgres                          11.5                5f1485c70c9a        7 months ago        293MB
discourse/base                    2.0.20190906-0522   6da16759d83d        8 months ago        2.29GB

It was the upgrade from 2.4.2 (when I did a “…/launcher rebuild app”) that triggered a new docker image to be installed (I saw it in the console log output).

I don’t know how future-proof I’ll be if I remain on this new docker image while using "templates/postgres.10.template.yml". It’s an important forum, which is why I stayed on “stable”. I really wish the devs didn’t mess with the “stable” channel.


There’s typically no downgrade method without restoring a backup. You would need to trawl through the commit history between versions to determine what changed.

Upgrades include database migrations and earlier versions won’t play nicely.


The update went through without any problems and was so easy, thanks to the great work done behind the scenes by the Discourse team :v:. Thanks.

After the update, the visit count (logged-in users, anons, crawlers) in the admin panel dropped to zero. All the other statistics are OK. Is this because of this? :point_down:

P.S.: I haven’t done the post-update tasks yet.


I got an error when running this command:

How can I solve this? I have users with usernames Xxx and XXX in the database.


Hi everyone,

Having performed this upgrade, our search doesn’t really work properly at all and returns:

[screenshot of the search error]

Similar to @some-user when trying to reindex, the following errors occur:

discourse=# REINDEX SCHEMA CONCURRENTLY public;
WARNING:  cannot reindex invalid index "public.plugin_store_rows_pkey_ccnew" concurrently, skipping
WARNING:  cannot reindex invalid index "public.index_plugin_store_rows_on_plugin_name_and_key_ccnew" concurrently, skipping
WARNING:  cannot reindex invalid index "pg_toast.pg_toast_18583_index_ccnew" concurrently, skipping
ERROR:  could not create unique index "index_plugin_store_rows_on_plugin_name_and_key_ccnew1"
DETAIL:  Key (plugin_name, key)=(discourse-data-explorer, q:-2) is duplicated.
discourse=#

Any help or advice would be very well received as this issue is beyond my technical skillset.

Thanks.


Thanks for the great teamwork and upgrade.

We are not ready to upgrade to PG 12, but we need to rebuild for some installed plugins.

What exactly must we do to ensure that when we run either of these:

./launcher rebuild app   or
./launcher bootstrap app

that our scripts will not try to download and overwrite any of our existing /var/discourse/templates/* and also will not pull in any new version of Discourse?

We simply want to build our current application as is, with all the current setup, not pull down any cool new features, and keep our status quo until a later month.

Kindly advise us.


Interestingly, if I try and rebuild the index again, I get a slightly different result:

ERROR:  could not create unique index "index_plugin_store_rows_on_plugin_name_and_key_ccnew2"
DETAIL:  Key (plugin_name, key)=(discourse-data-explorer, q:-2) is duplicated.

Note the ccnew2, where before it was ccnew1.

Does that help? Or just complicate things further?

Also, since my previous post 10 minutes ago, the search now appears to be working again somehow? :thinking:


For the “key duplicated” error, I’ve already seen some other support topics; check this one for an example, it may help:


You need to somehow delete the row with the duplicate plugin_store_rows key.

Try this, after ./launcher enter app then rails c:

rows = PluginStoreRow.where("key LIKE '%q:-2'")
# validate there are two results from data explorer
rows.second.destroy!
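
If you want to check for any other duplicated (plugin_name, key) pairs before destroying anything, a query along these lines should list them (a sketch, run from the host and assuming your container is named app):

docker exec -it app su postgres -c 'psql -d discourse -c "SELECT plugin_name, key, COUNT(*) FROM plugin_store_rows GROUP BY plugin_name, key HAVING COUNT(*) > 1;"'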

After you’ve done that, if you want to fix the “invalid index” issues, the best I can think of is to back up the site and restore on a new cloud server, because something funky is going on there.


The discourse/base images are downloaded from Docker Hub and are used as a base layer when you locally bootstrap the Docker image that is actually going to be run, local_discourse/app.

If you have a backup of your old local_discourse/app image, I believe you can run it together with a correspondingly old backup of the “shared/standalone” file tree. I am not sure whether the shared/standalone tree holds any “state” that actually matters beyond the Postgres DB and “uploads” included in the standard backup files (in case those are all you have).

Beware that if you run ./launcher to run your image, the version of launcher (from the discourse_docker git repo) could possibly be an issue, although I believe the “start” action should be pretty much backwards compatible. launcher does hardcode a certain version of the discourse/base image that it interacts with (and downloads if needed). So maybe roll back your discourse_docker repo as well to be sure, and check out a new branch. I was surprised to see launcher itself updating the same git repository that it was run from, a no-go in my opinion for any proper tool. It seems e.g. checking out a new branch prevents this.

Be sure to make backups of everything before trying out!


Thanks @tormod - I’ll give it a go later. If you don’t mind a PM, I may pick your brain about specific issues. I noticed that ./launcher is checking out master from discourse/discourse_docker (it was that which triggered the update to the pgsql templates). It would be nice if I could force it to stay on a specific revision in that repo. Just setting params: version: tags/v2.4.2 didn’t prevent ./launcher from downloading a new docker image and updating the templates. I was under the impression that keeping params: version: stable would keep things in check, but clearly that doesn’t apply to what ./launcher does to the templates and docker images.


Please keep discussion in this topic about upgrading Discourse instances to PostgreSQL 12, and resolving issues related to that.

Any issues with the organization of Meta should be brought up elsewhere. Public discussion belongs in #site-feedback. If you need to talk to the team privately, send a PM to the moderators group.


Couple of questions regarding the update:

  1. I ended up doing a migration to a new, fresh install. This would mean that the install already has PG12. Since the data was imported with a Backup Restore, should I do the “Optimizing PostgreSQL Statistics” and the “Re-indexing your database” optional tasks?

  2. Since the upgrade/migration, I’m unable to rebuild the Data container without “immediate” shutdowns of the process:

     2020-05-24 18:30:57.049 UTC [195] discourse@discourse LOG:  duration: 129.861 ms  statement: SELECT p.topic_id, p.post_number
             FROM post_actions pa
             JOIN posts p ON pa.post_id = p.id
             WHERE p.deleted_at IS NULL AND pa.deleted_at IS NULL AND
                pa.post_action_type_id = 1 AND
                pa.user_id = 194 AND
                p.topic_id IN (65880,65992,65932,66061,66056,65943,4800,66055,66053,66063,66062,53067,62120,54005,35250,60599,61313,348,65990,66026,51415,66024,66010,64944,65178,56940,66047,63312,65997,50240)
             ORDER BY p.topic_id, p.post_number
    
     2020-05-24 18:30:58.876 UTC [186] discourse@discourse LOG:  duration: 811.803 ms  statement: SELECT
                        u.id AS user_id,
                        topics.id AS topic_id,
                        topics.created_at,
                        highest_post_number,
                        last_read_post_number,
                        c.id AS category_id,
                        tu.notification_level
             FROM topics
             JOIN users u on u.id = 398
             JOIN user_stats AS us ON us.user_id = u.id
             JOIN user_options AS uo ON uo.user_id = u.id
             JOIN categories c ON c.id = topics.category_id
             LEFT JOIN topic_users tu ON tu.topic_id = topics.id AND tu.user_id = u.id
             LEFT JOIN category_users ON category_users.category_id = topics.category_id AND category_users.user_id = 398
             WHERE u.id = 398 AND
    
                   topics.archetype <> 'private_message' AND
                   ((1=0) OR ("topics"."deleted_at" IS NULL AND topics.created_at >= GREATEST(CASE
                               WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -1 THEN u.created_at
                               WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -2 THEN COALESCE(u.previous_visit_at,u.created_at)
                               ELSE ('2020-05-24 18:30:58.064163'::timestamp - INTERVAL '1 MINUTE' * COALESCE(uo.new_topic_duration_minutes, 2880))
                            END, us.new_since, '2020-05-17 09:29:44') AND tu.last_read_post_number IS NULL AND COALESCE(tu.notification_level, 2) >= 2 AND topics.created_at > '2020-05-17 09:29:44' AND (category_users.last_seen_at IS NULL OR topics.created_at > category_users.last_seen_at))) AND
                   (topics.visible ) AND
    
                   topics.deleted_at IS NULL AND
                   (
              NOT c.read_restricted  OR c.id IN (
                 SELECT c2.id FROM categories c2
                 JOIN category_groups cg ON cg.category_id = c2.id
                 JOIN group_users gu ON gu.user_id = 398 AND cg.group_id = gu.group_id
                 WHERE c2.read_restricted )
             ) AND
    
                   NOT (
                     last_read_post_number IS NULL AND
                     COALESCE(category_users.notification_level, 1) = 0
                   )
    
             UNION ALL
    
             SELECT
                        u.id AS user_id,
                        topics.id AS topic_id,
                        topics.created_at,
                        highest_post_number,
                        last_read_post_number,
                        c.id AS category_id,
                        tu.notification_level
             FROM topics
             JOIN users u on u.id = 398
             JOIN user_stats AS us ON us.user_id = u.id
             JOIN user_options AS uo ON uo.user_id = u.id
             JOIN categories c ON c.id = topics.category_id
             LEFT JOIN topic_users tu ON tu.topic_id = topics.id AND tu.user_id = u.id
             LEFT JOIN category_users ON category_users.category_id = topics.category_id AND category_users.user_id = 398
             WHERE u.id = 398 AND
                    topics.updated_at >= us.first_unread_at AND
                   topics.archetype <> 'private_message' AND
                   (("topics"."deleted_at" IS NULL AND tu.last_read_post_number < topics.highest_post_number AND COALESCE(tu.notification_level, 1) >= 2) OR (1=0)) AND
                   (topics.visible ) AND
    
                   topics.deleted_at IS NULL AND
                   (
              NOT c.read_restricted  OR c.id IN (
                 SELECT c2.id FROM categories c2
                 JOIN category_groups cg ON cg.category_id = c2.id
                 JOIN group_users gu ON gu.user_id = 398 AND cg.group_id = gu.group_id
                 WHERE c2.read_restricted )
             ) AND
    
                   NOT (
                     last_read_post_number IS NULL AND
                     COALESCE(category_users.notification_level, 1) = 0
                   )
    
     2020-05-24 18:30:59.564 UTC [191] discourse@discourse LOG:  duration: 3068.919 ms  statement: SELECT
                        u.id AS user_id,
                        topics.id AS topic_id,
                        topics.created_at,
                        highest_staff_post_number highest_post_number,
                        last_read_post_number,
                        c.id AS category_id,
                        tu.notification_level
             FROM topics
             JOIN users u on u.id = 3
             JOIN user_stats AS us ON us.user_id = u.id
             JOIN user_options AS uo ON uo.user_id = u.id
             JOIN categories c ON c.id = topics.category_id
             LEFT JOIN topic_users tu ON tu.topic_id = topics.id AND tu.user_id = u.id
             LEFT JOIN category_users ON category_users.category_id = topics.category_id AND category_users.user_id = 3
             WHERE u.id = 3 AND
    
                   topics.archetype <> 'private_message' AND
                   ((1=0) OR ("topics"."deleted_at" IS NULL AND topics.created_at >= GREATEST(CASE
                               WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -1 THEN u.created_at
                               WHEN COALESCE(uo.new_topic_duration_minutes, 2880) = -2 THEN COALESCE(u.previous_visit_at,u.created_at)
                               ELSE ('2020-05-24 18:30:56.495315'::timestamp - INTERVAL '1 MINUTE' * COALESCE(uo.new_topic_duration_minutes, 2880))
                            END, us.new_since, '2020-05-17 09:29:44') AND tu.last_read_post_number IS NULL AND COALESCE(tu.notification_level, 2) >= 2 AND topics.created_at > '2020-05-17 09:29:44' AND (category_users.last_seen_at IS NULL OR topics.created_at > category_users.last_seen_at))) AND
    
    
                   topics.deleted_at IS NULL AND
                   (
              NOT c.read_restricted  OR c.id IN (
                 SELECT c2.id FROM categories c2
                 JOIN category_groups cg ON cg.category_id = c2.id
                 JOIN group_users gu ON gu.user_id = 3 AND cg.group_id = gu.group_id
                 WHERE c2.read_restricted )
             ) AND
    
                   NOT (
                     last_read_post_number IS NULL AND
                     COALESCE(category_users.notification_level, 1) = 0
                   )
    
             UNION ALL
    
             SELECT
                        u.id AS user_id,
                        topics.id AS topic_id,
                        topics.created_at,
                        highest_staff_post_number highest_post_number,
                        last_read_post_number,
                        c.id AS category_id,
                        tu.notification_level
             FROM topics
             JOIN users u on u.id = 3
             JOIN user_stats AS us ON us.user_id = u.id
             JOIN user_options AS uo ON uo.user_id = u.id
             JOIN categories c ON c.id = topics.category_id
             LEFT JOIN topic_users tu ON tu.topic_id = topics.id AND tu.user_id = u.id
             LEFT JOIN category_users ON category_users.category_id = topics.category_id AND category_users.user_id = 3
             WHERE u.id = 3 AND
                    topics.updated_at >= us.first_unread_at AND
                   topics.archetype <> 'private_message' AND
                   (("topics"."deleted_at" IS NULL AND tu.last_read_post_number < topics.highest_staff_post_number AND COALESCE(tu.notification_level, 1) >= 2) OR (1=0)) AND
    
    
                   topics.deleted_at IS NULL AND
                   (
              NOT c.read_restricted  OR c.id IN (
                 SELECT c2.id FROM categories c2
                 JOIN category_groups cg ON cg.category_id = c2.id
                 JOIN group_users gu ON gu.user_id = 3 AND cg.group_id = gu.group_id
                 WHERE c2.read_restricted )
             ) AND
    
                   NOT (
                     last_read_post_number IS NULL AND
                     COALESCE(category_users.notification_level, 1) = 0
                   )
    
     I, [2020-05-24T18:31:00.886223 #1]  INFO -- :
     I, [2020-05-24T18:31:00.886719 #1]  INFO -- : Terminating async processes
     I, [2020-05-24T18:31:00.887079 #1]  INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/12/bin/postmaster -D /etc/postgresql/12/main pid: 49
     I, [2020-05-24T18:31:00.887336 #1]  INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 178
     178:signal-handler (1590345060) Received SIGTERM scheduling shutdown...
     2020-05-24 18:31:00.887 UTC [49] LOG:  received fast shutdown request
     2020-05-24 18:31:00.895 UTC [49] LOG:  aborting any active transactions
     2020-05-24 18:31:00.896 UTC [194] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.896 UTC [191] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.896 UTC [188] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.896 UTC [185] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.896 UTC [192] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.897 UTC [202] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.897 UTC [204] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.897 UTC [201] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.897 UTC [195] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.897 UTC [196] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.896 UTC [203] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.897 UTC [193] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.912 UTC [49] LOG:  background worker "logical replication launcher" (PID 58) exited with exit code 1
     2020-05-24 18:31:00.912 UTC [187] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.916 UTC [186] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.917 UTC [184] discourse@discourse FATAL:  terminating connection due to administrator command
     2020-05-24 18:31:00.917 UTC [184] discourse@discourse STATEMENT:  SELECT "user_auth_tokens".* FROM "user_auth_tokens" WHERE ((auth_token = '11X66XxxXXXX4XXxXx4Xxx85x1x=' OR
                                       prev_auth_token = '11X66XxxXXXX4XXxXx4Xxx85x1x=') AND rotated_at > '2020-03-25 18:31:00.894320') LIMIT 1
     2020-05-24 18:31:00.943 UTC [213] discourse@discourse FATAL:  the database system is shutting down
     2020-05-24 18:31:00.949 UTC [53] LOG:  shutting down
     2020-05-24 18:31:00.949 UTC [214] discourse@discourse FATAL:  the database system is shutting down
     178:M 24 May 2020 18:31:00.959 # User requested shutdown...
     178:M 24 May 2020 18:31:00.960 * Saving the final RDB snapshot before exiting.
     2020-05-24 18:31:01.215 UTC [49] LOG:  database system is shut down
     178:M 24 May 2020 18:31:03.127 * DB saved on disk
     178:M 24 May 2020 18:31:03.127 # Redis is now ready to exit, bye bye...
    

I never had anything like this before. I don’t know if it is related to the PG12 upgrade; any ideas? Everything works, besides an issue with Oneboxes not working for some things, like YouTube, but I’m worried that if I ever need to rebuild the Data container, I just can’t.


Are you stopping the web_only container beforehand?

Statistics should be done after a restore.


@riking I have a similar problem with a number of indices post-upgrade. Can you help out with the proper commands to remove duplicates?

discourse=# REINDEX INDEX CONCURRENTLY "public"."index_incoming_referers_on_path_and_incoming_domain_id_ccnew";
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id_c_ccnew6"
DETAIL:  Key (path, incoming_domain_id)=(/search/, 3433) is duplicated.
CONTEXT:  parallel worker
discourse=# REINDEX INDEX CONCURRENTLY "public"."index_incoming_referers_on_path_and_incoming_domain_id_ccnew1";
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id_c_ccnew7"
DETAIL:  Key (path, incoming_domain_id)=(/search/, 1905) is duplicated.
discourse=# REINDEX INDEX CONCURRENTLY "public"."index_incoming_referers_on_path_and_incoming_domain_id_cc_ccnew";
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id_c_ccnew8"
DETAIL:  Key (path, incoming_domain_id)=(/m/search, 26) is duplicated.
discourse=# REINDEX INDEX CONCURRENTLY "public"."index_incoming_referers_on_path_and_incoming_domain_id_c_ccnew1";
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id_c__ccnew"
DETAIL:  Key (path, incoming_domain_id)=(/search/, 1905) is duplicated.
discourse=# REINDEX INDEX CONCURRENTLY "public"."index_incoming_referers_on_path_and_incoming_domain_id_c_ccnew2";
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id_c_ccnew9"
DETAIL:  Key (path, incoming_domain_id)=(/m/search, 26) is duplicated.
discourse=# REINDEX INDEX CONCURRENTLY "public"."index_incoming_referers_on_path_and_incoming_domain_id_c_ccnew3";
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id__ccnew10"
DETAIL:  Key (path, incoming_domain_id)=(/m/search, 26) is duplicated.
discourse=# REINDEX INDEX CONCURRENTLY "public"."index_incoming_referers_on_path_and_incoming_domain_id_c_ccnew4";
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id__ccnew11"
DETAIL:  Key (path, incoming_domain_id)=(/m/search, 26) is duplicated.
discourse=# REINDEX INDEX CONCURRENTLY "public"."index_incoming_referers_on_path_and_incoming_domain_id_c_ccnew5";
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id__ccnew12"
DETAIL:  Key (path, incoming_domain_id)=(/search/, 3433) is duplicated.
CONTEXT:  parallel worker

records = IncomingReferer.where("path LIKE ?", "%/search/").where(incoming_domain_id: 3433)
# inspect the results, then destroy the duplicate(s), e.g. records.second.destroy!

Thanks @Falco!

For that log, no, I did not. But I thought I wasn’t supposed to, since when I try to rebuild with the web_only container stopped, I get this:

178:M 24 May 2020 23:25:45.123 * DB loaded from disk: 1.428 seconds
178:M 24 May 2020 23:25:45.123 * Ready to accept connections
I, [2020-05-24T23:25:53.698482 #1]  INFO -- :
I, [2020-05-24T23:25:53.698768 #1]  INFO -- : Terminating async processes
I, [2020-05-24T23:25:53.698844 #1]  INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/12/bin/postmaster -D /etc/postgresql/12/main pid: 49
I, [2020-05-24T23:25:53.699251 #1]  INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 178
2020-05-24 23:25:53.699 UTC [49] LOG:  received fast shutdown request
178:signal-handler (1590362753) Received SIGTERM scheduling shutdown...
2020-05-24 23:25:53.702 UTC [49] LOG:  aborting any active transactions
2020-05-24 23:25:53.704 UTC [49] LOG:  background worker "logical replication launcher" (PID 58) exited with exit code 1
2020-05-24 23:25:53.706 UTC [53] LOG:  shutting down
2020-05-24 23:25:53.736 UTC [49] LOG:  database system is shut down
178:M 24 May 2020 23:25:53.749 # User requested shutdown...
178:M 24 May 2020 23:25:53.749 * Saving the final RDB snapshot before exiting.
178:M 24 May 2020 23:25:55.832 * DB saved on disk
178:M 24 May 2020 23:25:55.832 # Redis is now ready to exit, bye bye...
sha256:d44542d52f78be2c1d39d29c4078776baa7cafcf8a245f77e4bf8ce441520e6e
73a2b1ace101727095f4a278b428dd0cba71ea7be2763b068f57a1a04d4bfcd7
Removing old container
+ /usr/bin/docker rm data
data

That looks like a successful rebuild :thinking:. What is the problem?
