Those steps are indeed correct. I did a few dry runs of these migrations today, and even though they ran successfully, the images are not rendered in the posts, even after a rebake. I’d recommend holding off on the migration until this is resolved @Woodcock
Whenever I run rake uploads:migrate_from_s3, the URLs are correct and the images exist at those URLs.
The post does not load the images, but the composer preview loads them successfully.
This is seen before and after rebaking all posts.
Here’s an example URL after migrating from S3:
I would:
- Put the site in read-only mode
- Run the above tutorial
- Update site settings to Digital Ocean Spaces
- Disable read-only
- Remap URLs from S3 to DO Spaces
This will be way faster than migrating S3 -> local and then local -> Spaces.
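The checklist above might look roughly like this in practice. This is only a sketch assuming a standard Docker install; 'old-s3-host' and 'new-spaces-host' are placeholders for your actual hostnames, not values from this thread:

```shell
# Sketch only; assumes a standard /var/discourse Docker install.
cd /var/discourse
./launcher enter app

discourse enable_readonly    # 1. put the site in read-only mode
# 2-3. follow the tutorial above and point the S3 site settings
#      at your DigitalOcean Spaces bucket/endpoint (admin UI)
discourse disable_readonly   # 4. take the site out of read-only mode
# 5. rewrite stored URLs (placeholder hostnames below)
discourse remap 'https://old-s3-host' 'https://new-spaces-host'
rake posts:rebake            # re-cook posts so the new URLs take effect
```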
My question is how do you do this step?
Edit: I dug around and found the remap command is probably what I need…
So in my case, the remap should look like this if I’m not mistaken.
I’m also thinking a rebake would be in order after this, or is that unnecessary?
FYI, remap does not work here because it only affects data in the post record, which does not contain the URL. So, back to my question: how do you remap URLs from S3 to Spaces? Thanks guys!
remap should work; it performs a deep and complete substitution on every single column in the database. As long as you know what the from → to mapping is, you will be fine. The rebake at the end helps because it then re-CDNifies your cooked markdown, so you can allow for a CDN change as well if needed.
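To make that concrete, the substitution remap performs is essentially a global find/replace over the database. Here is a minimal stand-in using sed with the hostnames from this thread; it only illustrates the string rewrite, not the actual database operation:

```shell
# Illustration only: remap rewrites every occurrence of FROM to TO,
# much like a global sed substitution over each column's text.
FROM='npn-ndfapda.netdna-ssl.com'
TO='npn.sfo2.cdn.digitaloceanspaces.com'
echo "<img src=\"https://$FROM/original/2X/1/abc.jpeg\">" | sed "s|$FROM|$TO|g"
# -> <img src="https://npn.sfo2.cdn.digitaloceanspaces.com/original/2X/1/abc.jpeg">
```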
I think that’s the problem though, I need to change the cooked url. I tried to change a single image using this command:
0 posts remapped!
It seems the cooked url is not stored in the database, am I correct on that assumption?
Not at all, it is definitely stored in the DB… I would try discourse db:remap, which is the more atomic version. rake posts:remap works on cooked. I think a rake posts:rebake would have worked, but using discourse db:remap is probably the better idea.
Forgive my ignorance, I tried this and it says
Could not find command "db:remap[https://n.....
Is this the correct way to run it?
cd /var/discourse
./launcher enter app
discourse db:remap["find","replace"]
You can see an example of db:remap here
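Judging by that error, the bracket syntax belongs to rake tasks; the standalone CLI inside the container takes plain positional arguments. A sketch, with placeholder hostnames rather than real ones:

```shell
cd /var/discourse
./launcher enter app
# inside the container: positional arguments, no rake-style brackets
discourse remap 'https://old-hostname' 'https://new-hostname'
```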
Thanks @sam, I finally got around to this and I’m still having trouble (of course). I ran these two commands as a test:
discourse remap 'https://npn-ndfapda.netdna-ssl.com/optimized/2X/1/1b7d579a96ff36bf113430403a56d8d013a80aec_1_666x500.jpeg' 'https://npn.sfo2.cdn.digitaloceanspaces.com/optimized/2X/1/1b7d579a96ff36bf113430403a56d8d013a80aec_1_666x500.jpeg'
discourse remap 'https://npn-ndfapda.netdna-ssl.com/original/2X/1/1b7d579a96ff36bf113430403a56d8d013a80aec_1_666x500.jpeg' 'https://npn.sfo2.cdn.digitaloceanspaces.com/original/2X/1/1b7d579a96ff36bf113430403a56d8d013a80aec_1_666x500.jpeg'
For each command it did report success:
Remapping posts cooked 1 rows affected!
But, as you can see in this post, the first image is not displaying, and that is the URL I tried to remap. It’s still the same URL despite the remap.
In another post, I tried to rebuild the HTML after doing this, and it changed the first part of the URL to
https://nature-photographers-network.s3.dualstack.us-west-1.amazonaws.com, which is my old S3 bucket; it is not even in my settings anymore…
I just set up DigitalOcean Spaces on my Discourse following the instructions above. I was able to upload images to Spaces, but backups fail. From the backup logs, it seems the backup location has not been changed:
[2018-12-01 06:46:28] EXCEPTION: /var/www/discourse/lib/backup_restore/backuper.rb:250:in `create_archive': Failed to gzip archive. gzip: /var/www/discourse/public/backups/default/leasehackr-forum-2018-12-01-063452-v20181129094518.tar.gz: No space left on device
My backup setup is the following:
What am I missing? Thank you!
It’s right there in the error message. Your local disk is full.
Yes, my local disk is full; that’s why I want to move to DO Spaces, which I just created. Is that possible?
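Note that regardless of where uploads live, the backup archive is still written to local disk first (the error shows it going to /var/www/discourse/public/backups inside the container), so it is worth checking free space before retrying. A quick check; the backups path below is the usual Docker-install default and may differ on your setup:

```shell
# How full is the root filesystem?
df -h /
# Size of existing local backups (usual Docker-install path; adjust if needed)
du -sh /var/discourse/shared/standalone/backups 2>/dev/null || true
```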
On this issue, I think that requiring DISCOURSE_S3_* environment variables to make the
rake uploads:migrate_to_s3 task work brings some unwanted consequences.
They seem to take precedence over, and duplicate, the exact same settings one can enter on the site settings page, but they also change the link of every other asset (including JS) to the provided CDN URL.
All this just to be able to move upload files from a local server to S3/DigitalOcean Spaces? Seems like overkill, or a bug.
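For context, these are the variables set in the env section of containers/app.yml. A hypothetical fragment with placeholder values; the variable names reflect my understanding of what the task reads, so double-check them against your install:

```yaml
## containers/app.yml (fragment; placeholder values)
env:
  DISCOURSE_S3_REGION: "us-west-1"
  DISCOURSE_S3_ACCESS_KEY_ID: "your-key"
  DISCOURSE_S3_SECRET_ACCESS_KEY: "your-secret"
  DISCOURSE_S3_BUCKET: "your-bucket"
  DISCOURSE_S3_CDN_URL: "https://your-cdn.example.com"
```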
See the ongoing discussion here.