Migrate_to_s3 task fails because it places too many requests

So, I’ve been fighting for a few days to move my locally stored uploads to DigitalOcean Spaces, and after a lot of struggling I managed to run rake uploads:migrate_to_s3, even though so far none of my attempts has been successful.

A bit of background info:

  • I’m running Discourse 2.2 beta 7 +117
  • I have successfully set up Discourse to store uploads and backups on Spaces, and they have been working reliably for days
  • I have defined the environment variables in app.yml as explained here, but I had to leave out DISCOURSE_S3_CDN_URL (DISCOURSE_S3_REGION, DISCOURSE_S3_ACCESS_KEY_ID, DISCOURSE_S3_SECRET_ACCESS_KEY and DISCOURSE_S3_BUCKET are all set), because otherwise, after a rebuild, Discourse would look for all assets (including the js stuff) on Spaces, which is neither good nor what I need :angel: (a redacted fragment of my env section is shown below the screenshot)
  • I have run the rake task a few times, for several hours each, but every time it eventually crashes with the error below.

[screenshot: output of the rake task, ending with the Aws::S3::Errors::SlowDown: Please reduce your request rate. exception and reporting 1000 files detected on S3]
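
For context, the relevant fragment of the env section of my app.yml looks roughly like this (all values are placeholders for my actual ones):

env:
  ## ... other variables unchanged ...
  DISCOURSE_S3_REGION: "ams3"
  DISCOURSE_S3_ACCESS_KEY_ID: "XXXXXXXXXXXX"
  DISCOURSE_S3_SECRET_ACCESS_KEY: "XXXXXXXXXXXX"
  DISCOURSE_S3_BUCKET: "my-discourse-data-bucket"
  # DISCOURSE_S3_CDN_URL deliberately left unset, see above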

Questions:

  1. How can I throttle the rate at which this script places requests?
  2. Why do we depend on setting environment variables that can potentially break the UI? Shouldn’t the data we already entered in the configuration via the web UI be enough to run the migrate_to_s3 task?

Back here just to report how I finally moved forward on this issue, hoping it can be useful to other users while the developers work on a Discourse-side solution:

  • There was no way to complete the migration of the uploads using the rake uploads:migrate_to_s3 task. I tried at least 10 times, but sooner or later it always failed. It might be something specific to DigitalOcean Spaces rather than the Amazon S3 service, but the script kept failing with the Aws::S3::Errors::SlowDown: Please reduce your request rate. exception. This tells me (I might be wrong, in which case I apologise) that this exception is not handled properly and could be dealt with via exponential backoff (see the sketch right after this list for the pattern I mean).

  • I manually copied all the uploads to Spaces using the s3cmd utility, with something like s3cmd sync --skip-existing /var/discourse/shared/standalone/uploads/default/ s3://my-discourse-data-bucket --acl-public (please note that this command must be understood and adapted to your Discourse set-up; do not just throw commands around expecting magic to happen…)

  • After a few hours the copy completed, and I only had to update the database. I then launched the rake uploads:migrate_to_s3 task once again, expecting it to “see” all the copied files and just proceed with the DB updates. Unfortunately, that was not the case: the number of files detected on S3/Spaces was once again just 1000. Which brings me to another thing I could not understand: why, even though some runs of the script lasted longer and therefore copied more files over to Spaces, after a crash and re-launch the number of detected files on S3/Spaces stayed fixed at 1000 (see picture above), as if the newly copied files were simply ignored? (My guess about the reason, with a listing sketch, also follows this list.)

  • Then I decided to proceed manually with the DB update as well: I read the migrate_to_s3 task source code carefully and cherry-picked from its final part, executing the instructions directly from the rails console. Specifically, I ran the following (once again, if you are reading this and thinking of repeating what I did, please consider that this can lead to disaster if you don’t know what you are doing and don’t have a very recent backup of your DB, just in case):


db = RailsMultisite::ConnectionManagement.current_db
bucket, folder = GlobalSetting.s3_bucket, "" # I have no folder on my bucket
prefix = "original/" # the capture group below starts after "original/", so it must be re-added here

excluded_tables = %w{
  email_logs
  incoming_emails
  notifications
  post_search_data
  search_logs
  stylesheet_cache
  user_auth_token_logs
  user_auth_tokens
  web_hooks_events
}

from = "/uploads/#{db}/original/(\\dX/(?:[a-f0-9]/)*[a-f0-9]{40}[a-z0-9\\.]*)"
to = "#{SiteSetting.Upload.s3_base_url}/#{folder}#{prefix}\\1"
DbHelper.regexp_replace(from, to, excluded_tables: excluded_tables)

from = "#{Discourse.base_url}#{SiteSetting.Upload.s3_base_url}"
to = SiteSetting.Upload.s3_cdn_url
DbHelper.remap(from, to, excluded_tables: excluded_tables)

if Discourse.asset_host.present?
  # Uploads that were on the local CDN will now be on the S3 CDN
  from = "#{Discourse.asset_host}#{SiteSetting.Upload.s3_base_url}"
  to = SiteSetting.Upload.s3_cdn_url
  DbHelper.remap(from, to, excluded_tables: excluded_tables)
end

  • The database update worked like a charm, and I then started rebaking my 297000 posts with rake posts:rebake, which is going to take another few hours.
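
To expand on the backoff remark (and on my question 1 above): this is just a sketch of the pattern I mean, not code from the Discourse task. with_backoff is a hypothetical helper of mine, and s3_client / key / file stand in for whatever client and per-file values the task uses internally (I assume aws-sdk-s3 and its Aws::S3::Client here).

# Hypothetical helper: retry a block with exponential backoff plus jitter
# whenever S3 asks us to slow down.
def with_backoff(max_retries: 5, base_delay: 1)
  retries = 0
  begin
    yield
  rescue Aws::S3::Errors::SlowDown
    raise if retries >= max_retries
    sleep(base_delay * (2**retries) + rand) # 1s, 2s, 4s, ... plus jitter
    retries += 1
    retry
  end
end

# Wrapping each per-file upload would let the task ride out the rate limit:
with_backoff do
  s3_client.put_object(bucket: "my-discourse-data-bucket", key: key, body: file)
end

If I read the SDK documentation correctly, Aws::S3::Client also accepts a retry_limit option, which might be an even simpler knob to turn.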
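
About the mysterious fixed count of 1000: my guess (unverified) is that this is simply the S3 list-objects page size. A single list request returns at most 1000 keys, so a listing that does not follow the continuation tokens will report exactly 1000 as soon as the bucket holds more than that. A minimal counting sketch with aws-sdk-s3 (region, endpoint and bucket are placeholders for my Spaces set-up):

require "aws-sdk-s3"

client = Aws::S3::Client.new(
  region: "ams3",                                 # placeholder
  endpoint: "https://ams3.digitaloceanspaces.com" # Spaces needs an explicit endpoint
)

count = 0
opts = { bucket: "my-discourse-data-bucket" }
loop do
  resp = client.list_objects_v2(opts)
  count += resp.contents.size
  break unless resp.is_truncated
  opts[:continuation_token] = resp.next_continuation_token
end
puts count # the real number of objects, not just the first page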

That’s all, folks. The rebake is still ongoing as I type this, but all the spot checks I made have shown good results. The migration of my locally stored uploads to DigitalOcean Spaces is complete, but it was quite an ordeal.
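
For anyone who wants to replicate the spot checks: a quick one, from the rails console, is to look for cooked posts that still reference the old local paths (assuming the default database name, as in the snippet above):

# Should trend towards 0 as the rebake progresses
Post.where("cooked LIKE ?", "%/uploads/default/original/%").count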

Discourse remains a superb product, and my criticism of the migrate_to_s3 script is not meant as bashing; rather, it is meant as encouragement to fix some annoying issues and make the script more robust.

Final note - Even though the manual copy of my uploads to Spaces completed successfully, I have not removed the locally hosted copies, as a precaution. I’ll keep them around for some time just in case, although in the long term I’ll delete them.


Weird, I never hit that rate limit error… Do you have lots of small files?

Also, the s3cmd sync will “break” any attachments (they will lose their file name).

In theory, it should be. But it’s not convenient to set the arguments in one place (settings UI) and use them in another (rake task).


Yes. About 37000 “originals” and of course about three times as many in the “optimized” directory. In total about 120000.

I do not understand this; what do you mean? The uploads I copied manually to Spaces were identical in name and path placement to those on my local disk. I ran several tests before the final rebake, which has just finished and was fully successful.

I see your point, but then please consider that setting environment variables can have unwanted consequences: at least in my experience, it required rebuilding the container and it broke the UI, as I explained here.
