Setting up file and image uploads to S3

So, you want to use S3 to handle image uploads? Here’s the definitive guide:

S3 registration

Head over to aws.amazon.com and click on Create a Free Account.

During the account creation process, make sure you provide payment information, otherwise you won’t be able to use S3. There’s no registration fee; you will only be charged for what you use, and only if you exceed the AWS Free Usage Tier.


You don’t need to create the S3 buckets manually. Discourse will automagically create them for you if they do not exist. :wink:

However, if you really want to create the S3 buckets yourself, please pay attention to the following notes:

  • The bucket name should not contain periods. A period in the name breaks HTTPS access, because the wildcard certificate for *.s3.amazonaws.com only covers a single level of subdomain.

  • When you set up the permissions, make sure that you allow public ACLs, otherwise uploads will fail.
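If you do create a bucket by hand, it can help to sanity-check the name first. Here is a minimal Python sketch (not part of Discourse; the regex encodes the no-periods rule above plus S3’s 3–63 character lowercase naming limits):

```python
import re

def valid_https_bucket_name(name: str) -> bool:
    """Check a bucket name against the S3 naming rules that matter here:
    3-63 characters, lowercase letters, digits, and hyphens only.
    Periods are technically allowed by S3 but break the wildcard TLS
    certificate on *.s3.amazonaws.com, so we reject them outright."""
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name))

print(valid_https_bucket_name("my-discourse-uploads"))  # True
print(valid_https_bucket_name("my.discourse.uploads"))  # False -- periods break HTTPS
```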

User creation

Creating a user account

Sign in to AWS Management Console and search for the “IAM” service to access the AWS Identity and Access Management (IAM) console which enables you to manage access to your AWS resources.

We need to create a user account, so click on the Users link on the left side and then the Add user button. Type in a descriptive user name and make sure the “Programmatic access” checkbox is checked.

Here’s the critical step: make sure you either download the credentials or copy both the Access key ID and Secret access key values somewhere safe. We will need them later.

Setting permissions

Once the user is created, we need to configure the user’s permissions. Select the user you’ve just created in the upper panel, click on the Permissions tab in the lower panel and then click the Add inline policy link.

Click on the JSON tab and use the following piece of code as a template for your policy document:

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:List*",
          "s3:Get*",
          "s3:AbortMultipartUpload",
          "s3:DeleteObject",
          "s3:PutObject",
          "s3:PutObjectAcl",
          "s3:PutLifecycleConfiguration",
          "s3:CreateBucket",
          "s3:PutBucketCORS"
        ],
        "Resource": [
          "arn:aws:s3:::name-of-your-bucket",
          "arn:aws:s3:::name-of-your-bucket/*"
        ]
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:ListAllMyBuckets",
          "s3:ListBucket"
        ],
        "Resource": "*"
      }
    ]
  }

Before applying the policy, make sure you replace both occurrences of “name-of-your-bucket” with the name of the bucket you will use for your Discourse instance.
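If you’d rather not rely on a manual find-and-replace, you can render the policy programmatically. This is an illustrative Python sketch with an abbreviated action list; substitute whatever actions your real policy template uses:

```python
import json

# Abbreviated stand-in for the real policy template -- the point is the
# placeholder substitution, not the exact action list.
POLICY_TEMPLATE = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:List*", "s3:Get*", "s3:PutObject", "s3:DeleteObject"],
            "Resource": [
                "arn:aws:s3:::name-of-your-bucket",
                "arn:aws:s3:::name-of-your-bucket/*",
            ],
        }
    ],
}

def render_policy(bucket: str) -> str:
    """Return the policy JSON with every bucket-name placeholder replaced."""
    return json.dumps(POLICY_TEMPLATE, indent=2).replace("name-of-your-bucket", bucket)

policy = render_policy("my-discourse-bucket")
assert "name-of-your-bucket" not in policy
```

Because the replacement is applied to the serialized document, there is no way to miss the second occurrence.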

Discourse configuration

Now that you’ve properly set up S3, the final step is to configure your Discourse forum. Make sure you’re logged in with an administrator account and go to the Settings section in the admin panel.

Type in “S3” in the textbox on the right to display only the relevant settings:

You will need to:

  • Check the “enable s3 uploads” checkbox to activate the feature
  • Paste in both “Access Key Id” and “Secret Access Key” in their respective text fields
  • Enter the name of the bucket you’ve authorized in the “s3 upload bucket” setting

You need to append a prefix to the bucket name if you want to use the same bucket for uploads and backups.

Examples of valid bucket settings
  1. Different buckets

    • s3_upload_bucket: name-of-your-upload-bucket
    • s3_backup_bucket: name-of-your-backup-bucket
  2. Different prefixes

    • s3_upload_bucket: name-of-your-bucket/uploads
    • s3_backup_bucket: name-of-your-bucket/backups
  3. Prefix for backups

    • s3_upload_bucket: name-of-your-bucket
    • s3_backup_bucket: name-of-your-bucket/backups
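The bucket/prefix split in the settings above can be sketched as follows; this is an illustrative Python snippet, not Discourse’s actual parsing code:

```python
def split_bucket_setting(value: str):
    """Split a bucket setting like "name-of-your-bucket/backups" into
    (bucket, prefix). The prefix is empty when no "/" is present."""
    bucket, _, prefix = value.partition("/")
    return bucket, prefix

print(split_bucket_setting("name-of-your-bucket/backups"))  # ('name-of-your-bucket', 'backups')
print(split_bucket_setting("name-of-your-bucket"))          # ('name-of-your-bucket', '')
```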

The “s3_region” setting is optional and defaults to “US East (N. Virginia)”. For better performance, enter the location nearest to your users (e.g. “EU (Frankfurt)”). If you created the bucket manually, you must select the same region you chose during the creation process.
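For reference, here are a few of AWS’s display names and their region codes (assumed accurate at the time of writing; check the AWS console for the authoritative list):

```python
# Display name -> region code, for a handful of common regions.
S3_REGIONS = {
    "US East (N. Virginia)": "us-east-1",
    "EU (Ireland)": "eu-west-1",
    "EU (Frankfurt)": "eu-central-1",
    "Asia Pacific (Sydney)": "ap-southeast-2",
}

print(S3_REGIONS["EU (Frankfurt)"])  # eu-central-1
```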


That’s it. From now on, all your images will be uploaded to and served from S3.


Do you want to store backups of your Discourse forum on S3 as well? Take a look at Configure automatic backups for Discourse.

Frequently Asked Questions

I reused the same bucket for uploads and backups and now backups aren’t working. What should I do?

The easiest solution is to append a path to the s3_backup_bucket. Here’s an example of how your settings should look afterwards.

  • s3_upload_bucket: my-bucket
  • s3_backup_bucket: my-bucket/backups

You can use the S3 Console to move existing backups into the new folder.

Do I really need to use separate buckets for uploads and backups?

No, you don’t, but separate buckets are usually the easiest setup. Essentially you need either two different buckets or a prefix for the backup bucket. For example, the following combinations will work:

  1. Different buckets

    • s3_upload_bucket: my-upload-bucket
    • s3_backup_bucket: my-backup-bucket
  2. Different prefixes

    • s3_upload_bucket: my-bucket/uploads
    • s3_backup_bucket: my-bucket/backups
  3. Prefix for backups (not recommended unless you previously reused the same bucket – see above question)

    • s3_upload_bucket: my-bucket
    • s3_backup_bucket: my-bucket/backups

Note that the free usage tier only lasts for 12 months, so don’t forget about it: create a billing alert in AWS so that it notifies you as soon as you start being charged.


Hitting the like button wasn’t enough, I just have to say this is awesome. Over at our forum (which is down right now :disappointed:) people have uploaded tens of thousands of files, and it’s unmanageable with old-style forums.

So thank you thank you thank you.


Just curious, what makes it unmanageable?


On SMF there’s a bunch of reasons:

  • All files are uploaded into a single directory
  • Files cannot be moved via the OS without breaking things
  • Filenames are generated

To me this seems like a sensible constraint; the only other two options are:

  • Store file hashes and run recovery jobs that figure out where files really are based on a hash and a full scan of the filesystem.
  • Store attachments in the db, which is a world of pain.
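For what it’s worth, the first option can be sketched in a few lines of Python (illustrative only; `recover` does a full filesystem scan, which is why it’s only viable as an occasional recovery job, not a lookup path):

```python
import hashlib
from pathlib import Path

def sha1_of(path: Path) -> str:
    """Hash a file's contents; the hash identifies the file regardless
    of where it lives on disk or what it has been renamed to."""
    return hashlib.sha1(path.read_bytes()).hexdigest()

def recover(root: Path, wanted_sha1: str) -> list[Path]:
    """Full scan: find every file under `root` whose contents match the
    stored hash -- the 'recovery job' idea from the list above."""
    return [p for p in root.rglob("*") if p.is_file() and sha1_of(p) == wanted_sha1]
```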

I agree, I probably wasn’t specific enough. SMF doesn’t really allow you to create your own directory structure. If you write a script to move a file and update the database, it still won’t find it. There’s a bunch of hardcoded crap in the code.

Our design should be safe for moving the install, our tables really should only store relative locations.

I followed these instructions to the letter and I’m getting “Sorry, there was an error uploading that file. Please try again.” every time.

Anyone got any ideas?

Have you got any errors in the logs?


There are some errors in production_errors.log: Amazon returns a 403, “The request signature we calculated does not match the signature you provided. Check your key and signing method.” It sounds like I put the keys in wrong, but I’ve checked that…

What’s your S3 user policy?



  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
          …
        ]
      }
    ]
  }
Sidenote: how do you get syntax highlighting on code blocks (like in the OP)?

Did you figure it out?

The highlighting engine is unfortunately not smart enough to detect that your code block is javascript. You can force it using GitHub’s fenced code blocks.
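For example (assuming the GitHub-style fence syntax), tagging the opening fence with a language name forces that highlighter:

    ```json
    {
      "Version": "2012-10-17"
    }
    ```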


I have the same “signature” problem.

Edit: I just changed the keys, and it went away. Maybe I had a trailing space? @haiku would you check yours?
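For anyone else hitting this: the 403 signature error is often just invisible whitespace in a pasted key. A quick illustrative check (the key below is AWS’s documented example credential, not a real one):

```python
def clean_credential(value: str) -> str:
    """Strip the stray whitespace (trailing spaces, newlines from a
    copy-paste) that commonly causes the 'signature does not match'
    403 error. Illustrative only -- re-pasting the keys carefully
    into the Discourse settings is the actual fix."""
    cleaned = value.strip()
    if cleaned != value:
        print("warning: credential had leading/trailing whitespace")
    return cleaned

clean_credential("AKIAIOSFODNN7EXAMPLE ")  # triggers the warning
```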


Faced a problem below…

Installed the latest Discourse today. Set up S3 using the guide above, but when creating a new topic with an image upload, it still loads from the local /uploads folder.

So I checked the Amazon S3 console; no bucket had been created.
Tried creating the bucket manually, but same issue.
Checked the keys to make sure there is no trailing space.
Checked the logs; there were no errors relating to S3.

How can I debug this?

Discourse Version:
Git Version: a1b501c3fba126a3bc1705bea69a6397196b396e

It is working now; I guess it was some sort of delay/cache?

Does anyone have any suggestions about how to migrate file system stored images into S3?

If I need to write a script to make it happen that’s cool (and I’d be happy to share it), just figured I’d ask here before diving into it.


I would also like some kind of howto for this.

Might want to add to the howto a recommendation to not use dots in the bucket name. This is allowed by AWS but prevents you from referencing the bucket under SSL.

I came a cropper over this one earlier.


Very good point, done in the code and here too. We’ve had 5-6 reports of bucket problems due to periods in the name, something we definitely want to avoid in the future.