Restore fails - could not create unique index

Hi, restore of backup fails with error:

CREATE INDEX
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id"
DETAIL:  Key (path, incoming_domain_id)=(/m/search, 25) is duplicated.
EXCEPTION: psql failed: DETAIL:  Key (path, incoming_domain_id)=(/m/search, 25) is duplicated.

I found a similar topic but could not make out what needs to be done: https://meta.discourse.org/t/getting-this-error-during-restore-could-not-create-unique-index/

I’d appreciate any step-by-step help for dummies to fix this.

2 Likes

This issue with the incoming_referers table has come up a few times recently. I’m not sure why that particular table is causing problems, but it seems likely that the issues are related. Maybe someone else on the Discourse team will have ideas about what could be causing the duplicate records to be created.

Do you still have access to the site that you created the backup file on? If so, the fix is to delete the duplicate record from the database and then create a new backup file. To do that, you would SSH into the old server and cd to the /var/discourse directory:

cd /var/discourse

Then run

./launcher enter app

Then enter the Rails console with

rails c

You should then see a prompt that looks similar to this:

[1] pry(main)>

Try running the following command from the Rails console and let us know what it returns:

IncomingReferer.where(path: "/m/search")

It should return an array with two or more records.
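
If it does, the fix is to keep one of the records and delete the rest. A sketch of what that could look like from the same console (assuming the query really does return both rows, and that keeping the lowest-id record is fine):

# Keep the first record for the duplicated key and destroy the rest
dupes = IncomingReferer.where(path: "/m/search", incoming_domain_id: 25).order(:id)
dupes.offset(1).each(&:destroy)

After that, create a new backup file and try the restore again.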

2 Likes

Thank you.
I will run this in the morning and report back.

1 Like

This is from the old installation - looks like just one record?

[1] pry(main)> IncomingReferer.where(path: "/m/search")
=> [#<IncomingReferer:0x00005638d834b130
  id: 5153,
  path: "/m/search",
  incoming_domain_id: 25>]
[2] pry(main)>

Edit: tried on the new server as well. It shows:

[1] pry(main)> IncomingReferer.where(path: "/m/search")
=> []
[2] pry(main)>

Thanks for checking that! The result you got is actually the same as what I saw with another site earlier today. It is a solvable problem, but I’m going to try to get one of our engineers to take a look at what is going on.

2 Likes

My primary reason for moving servers was that I was on Debian 8, which is going out of support.
Given this restore issue, I took the route of upgrading to Debian 9 on the same server instead. That has been successful, so there is some respite for now.
Thank you for your support.


You need to do a fuzzy search so that the query doesn’t assume that the index works. One percent sign is likely enough if it’s at the beginning, I think.

You can just delete the extra record. To do it right, though, you need to update the other table that links to this one (see the sketch below). I have to look it up every time, as there are a couple of different tables this happens to.

This problem is blamed on third-party extensions, which doesn’t make much sense. It seems like it must be Postgres’s fault, but I don’t know. I come across this a couple of times a month, it seems, across a bunch of sites.
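
For the record, the cleanup I end up doing looks roughly like this. It’s a sketch from memory, assuming incoming_links (via its incoming_referer_id column) is the table that points at these records — double-check that on your version:

# Find the dupes without trusting the (broken) unique index
rows = IncomingReferer.where("path LIKE ?", "%/m/search").order(:id)
rows.group_by { |r| [r.path, r.incoming_domain_id] }.each do |_key, dupes|
  keeper = dupes.shift
  dupes.each do |dupe|
    # Repoint anything linking to the duplicate at the record we keep...
    IncomingLink.where(incoming_referer_id: dupe.id).update_all(incoming_referer_id: keeper.id)
    # ...then delete the duplicate itself
    dupe.destroy
  end
end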

5 Likes

I have a duplicate key issue also, is there a documented fix?

discourse=# REINDEX SCHEMA CONCURRENTLY public;
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id_ccnew"
DETAIL:  Key (path, incoming_domain_id)=(/search/, 1905) is duplicated.

[1] pry(main)> IncomingReferer.where(path: "/m/search")
=> [#<IncomingReferer:0x0000557176d3f210 id: 44231, path: "/m/search", incoming_domain_id: 4>,
 #<IncomingReferer:0x0000557176d925c8 id: 42228, path: "/m/search", incoming_domain_id: 26>]

Even though I just upgraded my server in place and hence won’t be restoring to a new server after all, I tried this out of curiosity and did not find any records with the fuzzy search:

[1] pry(main)> IncomingReferer.where(path: "%/m/search%")
=> []
[2] pry(main)> IncomingReferer.where(path: "%/m/search")
=> []
[3] pry(main)> IncomingReferer.where(path: "/m/search%")
=> []

You need to use LIKE in order for wildcards to work:

IncomingReferer.where("path LIKE '%/m/search%'")
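
Or, equivalently, with a bound parameter so you don’t have to worry about quoting:

IncomingReferer.where("path LIKE ?", "%/m/search%")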
4 Likes

That brought up quite a few more duplicate keys.

[1] pry(main)> IncomingReferer.where("path LIKE '%/m/search%'")
=> [#<IncomingReferer:0x0000557eaa7ed488 id: 408, path: "/m/search", incoming_domain_id: 26>,
 #<IncomingReferer:0x0000557eaabd80c0 id: 1508, path: "/m/search", incoming_domain_id: 45>,
 #<IncomingReferer:0x0000557eaabe3268 id: 2216, path: "/m/search", incoming_domain_id: 420>,
 #<IncomingReferer:0x0000557eaabe2f20 id: 3081, path: "/m/search", incoming_domain_id: 230>,
 #<IncomingReferer:0x0000557eaabe2c00 id: 33210, path: "/m/search", incoming_domain_id: 4>,
 #<IncomingReferer:0x0000557eaabe2908 id: 44231, path: "/m/search", incoming_domain_id: 4>,
 #<IncomingReferer:0x0000557eaabe27c8 id: 42228, path: "/m/search", incoming_domain_id: 26>]
2 Likes

I would just nuke all the dupe rows … there is little value in this information.
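
If you want to see everything that needs nuking first, here is a sketch that just reports every duplicated pair without deleting anything:

# Returns a hash of [path, incoming_domain_id] => count for each key
# that appears more than once
IncomingReferer.group(:path, :incoming_domain_id).having("COUNT(*) > 1").count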

1 Like

Happy to do so, can you provide the correct command? Not familiar with postgres in particular but I do know SQL.

That’s good to hear. I’ve been laboriously updating the other table that links to these. It’s a huge pain, since I can never remember what it was, so it’s like doing it for the first time over and over again.

IncomingReferer.find(44231).destroy
IncomingReferer.find(42228).destroy
2 Likes

Removing those two duplicates was successful, but subsequently rebuilding the indexes threw up new errors. Is this a major problem? How do we fix it - delete that (/search/, 3433) row?

[1] pry(main)> IncomingReferer.find(44231).destroy
=> #<IncomingReferer:0x000055734c65d8e8 id: 44231, path: "/m/search", incoming_domain_id: 4>
[2] pry(main)> IncomingReferer.find(42228).destroy
=> #<IncomingReferer:0x000055734cd81a70 id: 42228, path: "/m/search", incoming_domain_id: 26>
postgres=# \connect discourse
You are now connected to database "discourse" as user "postgres".
discourse=# REINDEX SCHEMA CONCURRENTLY public;
WARNING:  cannot reindex invalid index "public.incoming_referers_pkey_ccnew" concurrently, skipping
WARNING:  cannot reindex invalid index "public.index_incoming_referers_on_path_and_incoming_domain_id_ccnew" concurrently, skipping
WARNING:  cannot reindex invalid index "pg_toast.pg_toast_2782645_index_ccnew" concurrently, skipping
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id_ccnew1"
DETAIL:  Key (path, incoming_domain_id)=(/search/, 3433) is duplicated.
CONTEXT:  parallel worker
1 Like

Here’s the code handling creation… this should be handling it properly, but we could update it to an ON CONFLICT insert if needed?
https://github.com/discourse/discourse/blob/888e68a1637ca784a7bf51a6bbb524dcf7413b13/app/models/incoming_referer.rb#L11-L20
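
Roughly this shape, if we went that way — a sketch assuming the mini_sql DB interface, not what’s in that file today:

# Hypothetical upsert; path and domain_id stand in for the method's inputs
DB.exec(<<~SQL, path: path, domain_id: domain_id)
  INSERT INTO incoming_referers (path, incoming_domain_id)
  VALUES (:path, :domain_id)
  ON CONFLICT (path, incoming_domain_id) DO NOTHING
SQL

(DO NOTHING means a follow-up SELECT would still be needed to fetch the row’s id.)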

4 Likes

I tried to rebuild those 4 indices manually. Two succeeded, two failed. Should I nuke those two duplicate rows?

discourse=# REINDEX INDEX CONCURRENTLY "public"."incoming_referers_pkey_ccnew";
REINDEX
discourse=# REINDEX INDEX CONCURRENTLY "public"."index_incoming_referers_on_path_and_incoming_domain_id_ccnew";
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id_cc_ccnew"
DETAIL:  Key (path, incoming_domain_id)=(/search/, 1861) is duplicated.
discourse=# REINDEX INDEX CONCURRENTLY "pg_toast"."pg_toast_2782645_index_ccnew";
REINDEX
discourse=# REINDEX INDEX CONCURRENTLY "index_incoming_referers_on_path_and_incoming_domain_id_ccnew1";
ERROR:  could not create unique index "index_incoming_referers_on_path_and_incoming_domain_id_c_ccnew1"
DETAIL:  Key (path, incoming_domain_id)=(/search/, 1905) is duplicated.
1 Like

Yes please nuke the dupe rows
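
Note that the numbers in those DETAIL lines are incoming_domain_id values, not row ids, so something like this (a sketch following the same keep-one pattern as earlier in the topic):

# Keep one row per duplicated key reported by REINDEX, destroy the rest
[["/search/", 1861], ["/search/", 1905]].each do |path, domain_id|
  IncomingReferer.where(path: path, incoming_domain_id: domain_id)
    .order(:id).offset(1).each(&:destroy)
end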

@riking pg corrupting indexes is a pg bug, not a Discourse bug. We can certainly improve the performance of that insert, but the pg bug is something that needs fixing in pg.

My guess is that it is something to do with some sort of rude shutdown of the DB engine, maybe on power loss.

2 Likes

That’s a reasonable explanation. Does ./launcher stop app (or rebuild) do a clean shutdown of postgres somehow? Oh, but I bet that an unattended upgrade doesn’t know how to do a clean shutdown of Docker containers, does it?