We recently went through this process, and there are a couple of ways to do it. The safe and slow way uses the `migrate_to_s3` rake task:
I think you could go ahead and turn on S3 uploads (there are a number of guides), possibly SSH into your container and run this task, and you wouldn't experience downtime.
We didn't go this route because it takes ~15 seconds per upload, and for us it was going to take days. We were doing this as part of a host migration, and couldn't tolerate downtime that long.
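To get a feel for why we ruled it out, here's the back-of-envelope arithmetic. The upload count below is a made-up example, not our actual number:

```shell
# At ~15 s per upload, serial migration time grows fast.
# 50000 is a hypothetical upload count for illustration.
uploads=50000
seconds_per_upload=15
total_seconds=$((uploads * seconds_per_upload))
echo "$((total_seconds / 86400)) days, $(( (total_seconds % 86400) / 3600 )) hours"
# → 8 days, 16 hours
```

Anything in the tens of thousands of uploads puts you well past a reasonable maintenance window.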
The quick and dirty route is as follows:
- Enable S3 Uploads on your site
- Back up your site with images, and download the archive.
- Unzip the archive, navigate to the `uploads` sub-folder, and upload the images to S3 using the aws-cli:

```
aws s3 cp . s3://<your-s3-bucket> --recursive --acl public-read
```
- We then need to remap all of the references to the public uploads folder to the new location in s3. At the console in your docker container:
```
root@dc53d70f611c:/var/www/discourse# discourse remap /uploads/default/ //<your-s3-bucket>.s3.amazonaws.com/
Rewriting all occurences of /uploads/default/ to //<your-s3-bucket>.s3.amazonaws.com/
THIS TASK WILL REWRITE DATA, ARE YOU SURE (type YES)
YES
Remapping ar_internal_metadata key
0 rows affected!
Remapping ar_internal_metadata value
... many more rows
```
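Conceptually, all the remap does is rewrite every occurrence of the old path prefix to the S3 URL across the database. You can see the effect on a sample cooked-post fragment with plain `sed` (the bucket name and image path here are placeholders, and `sed` just stands in for the real task, which rewrites rows in PostgreSQL):

```shell
# Illustrative only: what the remap does to each stored reference.
old='/uploads/default/'
new='//your-s3-bucket.s3.amazonaws.com/'
echo '<img src="/uploads/default/original/1X/abc.png">' | sed "s|$old|$new|g"
# → <img src="//your-s3-bucket.s3.amazonaws.com/original/1X/abc.png">
```

This is also why you want a fresh backup first: the rewrite is applied in place, with no built-in undo.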
We did the above during a scheduled maintenance window, and the site was in read only mode with a fresh backup on hand, so it was pretty low risk. I’m not sure I’d be comfortable doing it any other way, but it took less than an hour.