Rasin
(Rasin)
October 16, 2019, 6:38 AM
1
I was trying to restore my backup to a new server, and it showed some errors.
The log is here: Ubuntu Pastebin
The error may be:
PG::UniqueViolation: ERROR: could not create unique index "unique_index_categories_on_slug"
DETAIL: Key (COALESCE(parent_category_id, '-1'::integer), slug)=(5, ) is duplicated.
I thought it could be a DB problem, but I didn’t edit it. I just downloaded the backup tar; maybe I restored it wrongly.
Please help!
1 Like
ariznaf
(fernando)
October 16, 2019, 7:02 AM
2
There is another thread with a similar problem.
It is a strange problem.
Many of us have solved it just by retrying.
Try it several times, and maybe one of those times you won’t get the error.
Your problem may be different, but it resembles the database errors we had while restoring.
It seems there is a bug in the restore scripts, or some kind of bug in the Postgres database system used.
Rasin
(Rasin)
October 16, 2019, 7:12 AM
3
What a strange solution… FML
Would it be possible to edit the SQL file to make it valid?
I mean, is there any way to ignore the duplicated key?
Rasin
(Rasin)
October 16, 2019, 7:57 AM
4
And after rebuilding to a previous version or a beta version, the log turned into:
[2019-10-16 07:53:52] EXCEPTION: Compression::Strategy::ExtractFailed
/var/www/discourse/lib/compression/strategy.rb:89:in `block in extract_file'
/var/www/discourse/lib/compression/strategy.rb:85:in `open'
/var/www/discourse/lib/compression/strategy.rb:85:in `extract_file'
/var/www/discourse/lib/compression/strategy.rb:26:in `block (2 levels) in decompress'
/usr/local/lib/ruby/site_ruby/2.6.0/rubygems/package/tar_reader.rb:65:in `each'
/var/www/discourse/lib/compression/strategy.rb:18:in `block in decompress'
/var/www/discourse/lib/compression/tar.rb:26:in `get_compressed_file_stream'
/var/www/discourse/lib/compression/strategy.rb:15:in `decompress'
/var/www/discourse/lib/compression/pipeline.rb:26:in `block in decompress'
/var/www/discourse/lib/compression/pipeline.rb:24:in `each'
/var/www/discourse/lib/compression/pipeline.rb:24:in `reduce'
/var/www/discourse/lib/compression/pipeline.rb:24:in `decompress'
/var/www/discourse/lib/backup_restore/restorer.rb:141:in `decompress_archive'
/var/www/discourse/lib/backup_restore/restorer.rb:60:in `run'
/var/www/discourse/lib/backup_restore.rb:166:in `block in start!'
/var/www/discourse/lib/backup_restore.rb:163:in `fork'
/var/www/discourse/lib/backup_restore.rb:163:in `start!'
/var/www/discourse/lib/backup_restore.rb:22:in `restore!'
/var/www/discourse/app/controllers/admin/backups_controller.rb:119:in `restore'
Why is that a problem? I didn’t change anything in the tar file.
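One quick way to rule out archive corruption before retrying the restore is to verify the gzip stream and the tar listing with Python's standard library. This is a minimal sketch, not Discourse's own check; it builds a throwaway archive so it runs anywhere, but with a real backup you would point `backup` at your `.tar.gz` instead:

```python
import gzip
import os
import tarfile
import tempfile

# Build a tiny stand-in archive so this sketch is self-contained; with a real
# Discourse backup, set `backup` to the .tar.gz path instead.
tmp = tempfile.mkdtemp()
dump = os.path.join(tmp, "dump.sql")
with open(dump, "w") as f:
    f.write("-- pretend database dump\n")
backup = os.path.join(tmp, "backup.tar.gz")
with tarfile.open(backup, "w:gz") as tar:
    tar.add(dump, arcname="dump.sql")

# 1) Check the gzip stream is intact end to end (a truncated download fails here).
with gzip.open(backup, "rb") as gz:
    while gz.read(1 << 20):
        pass
print("gzip stream OK")

# 2) List the members without extracting; a damaged tar raises tarfile.ReadError.
with tarfile.open(backup, "r:gz") as tar:
    names = tar.getnames()
print(names)  # -> ['dump.sql']
```

If both steps pass, the archive itself is fine and the failure is more likely in the restore code path shown in the backtrace.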
Rasin
(Rasin)
October 16, 2019, 12:46 PM
6
Thank you very much for the help with the extraction issue!
Could you please help me with the duplicated key error? Or is there a way I can fix this SQL in psql?
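For finding the offending rows, the error message itself suggests the query: group categories by `COALESCE(parent_category_id, -1)` and `slug` and look for counts above one. Here is a hedged sketch with SQLite standing in for Postgres so it is runnable anywhere (the table and column names come from the error; in practice you would run the same `SELECT` in psql against the Discourse database):

```python
import sqlite3

# SQLite stands in for Postgres here; the schema is a minimal stand-in
# containing only the columns named in the error message.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE categories "
    "(id INTEGER PRIMARY KEY, parent_category_id INTEGER, slug TEXT)"
)
db.executemany(
    "INSERT INTO categories (parent_category_id, slug) VALUES (?, ?)",
    [(5, ""), (5, ""), (5, "general"), (None, "meta")],  # sample data
)

# Find the (parent, slug) pairs that would break the unique index.
# COALESCE(parent_category_id, -1) mirrors the expression in the error.
dupes = db.execute("""
    SELECT COALESCE(parent_category_id, -1) AS parent, slug, COUNT(*) AS n
    FROM categories
    GROUP BY parent, slug
    HAVING n > 1
""").fetchall()
print(dupes)  # -> [(5, '', 2)]
```

In this sample data the duplicate is two empty slugs under parent 5, which matches the shape of the `Key (..., slug)=(5, )` detail in your log.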
gerhard
(Gerhard Schlager)
October 16, 2019, 1:39 PM
7
Rasin:
The error may be:
PG::UniqueViolation: ERROR: could not create unique index "unique_index_categories_on_slug"
DETAIL: Key (COALESCE(parent_category_id, '-1'::integer), slug)=(5, ) is duplicated.
@daniel I think the migration in FIX: Add unique index to prevent duplicate slugs for categories · discourse/discourse@c71da3f · GitHub needs to make sure the column values are unique before creating the unique index.
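The general shape of such a fix can be sketched as "deduplicate first, then index". The snippet below demonstrates the technique with SQLite standing in for Postgres; the suffix-renaming scheme is made up for illustration and is not necessarily what the actual Discourse migration does:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE categories "
    "(id INTEGER PRIMARY KEY, parent_category_id INTEGER, slug TEXT)"
)
db.executemany(
    "INSERT INTO categories (parent_category_id, slug) VALUES (?, ?)",
    [(5, ""), (5, ""), (None, "meta")],  # two colliding slugs under parent 5
)

# Step 1: make colliding slugs unique first, e.g. by appending the row id to
# every duplicate after the first (the suffix scheme is illustrative only).
db.execute("""
    UPDATE categories
    SET slug = slug || '-' || id
    WHERE id NOT IN (
        SELECT MIN(id) FROM categories
        GROUP BY COALESCE(parent_category_id, -1), slug
    )
""")

# Step 2: with no duplicates left, the unique index builds without error.
db.execute("""
    CREATE UNIQUE INDEX unique_index_categories_on_slug
    ON categories (COALESCE(parent_category_id, -1), slug)
""")
print("index created")
```

Run in the other order, step 2 alone would fail with exactly the kind of unique-violation error quoted above.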
2 Likes
daniel
(Daniel Waterworth)
October 16, 2019, 2:36 PM
8
@Rasin, this should now be fixed as of:
https://github.com/discourse/discourse/commit/7ba914f1e10d90b1e4d25b0ab2427c94df456eae
Thank you for bringing this to our attention and I’m sorry for any inconvenience. Let us know if you have any other problems.
6 Likes
ariznaf
(fernando)
October 16, 2019, 9:52 PM
9
Yes, as I told you: it is an odd thing.
But it worked for several of us, as you can read here:
Wow, trying to reproduce it so I can be sure that things will go as planned on migration, and I have the error again? I’m going to continue to try and find the steps to reproduce a fix. Because I can’t reproduce it now.
I don’t know if your situation is the same, as the error is similar but not exactly the same.
@usulrasolas has commented that he edited the script with SQL statements to correct it.
But I (and others) did not change anything; we just tried multiple times.
It is odd, I know, but maybe there are timing or expiration issues involved.
We don’t know.
Developers are looking at it.
2 Likes
Rasin
(Rasin)
October 17, 2019, 1:22 AM
10
Thank you for all your suggestions! Looking forward to new updates.
system
(system)
Closed on
November 16, 2019, 1:33 AM
11
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.