Configure an S3 compatible object storage provider for uploads

@Falco - It could be good to add a warning for Scaleway: it only supports 1,000 parts for multipart uploads, while AWS supports 10,000. This isn’t a problem for regular uploads, but it is an issue for backup uploads over a certain size, as the S3 SDK will use up to 10,000 parts unless manually configured, and the upload will fail.
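For illustration, the arithmetic behind this is simple: parts needed = file size / part size, rounded up. A sketch (the 1,000/10,000 caps are from the post above, 5 MiB is the S3 minimum part size; `part_size_for` is a hypothetical helper, not an SDK API):

```ruby
# S3-compatible providers require each part (except the last)
# to be at least 5 MiB.
MIN_PART_SIZE = 5 * 1024 * 1024

# Smallest part size that keeps file_size within max_parts parts.
def part_size_for(file_size, max_parts)
  [(file_size.to_f / max_parts).ceil, MIN_PART_SIZE].max
end

backup = 12 * 1024**3 # a hypothetical 12 GiB backup

# At a 5 MiB default part size this needs 2,458 parts: fine for
# AWS's 10,000-part cap, but over Scaleway's 1,000-part cap.
parts_at_default = (backup.to_f / MIN_PART_SIZE).ceil

# Sizing parts for the 1,000 cap keeps the upload within limits.
safe_size = part_size_for(backup, 1_000)
parts_at_safe = (backup.to_f / safe_size).ceil
```

So a backup only has to exceed roughly 5 GiB (1,000 parts × 5 MiB) before the default part size stops working on Scaleway.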


Great find! Please add it to the OP wiki if you can.


Thank you. I’d also add that you can use any of these tools to copy from cloud to cloud, especially to/from S3-compatible object storage: Rclone, Shargate, Gs Richcopy360, and GoodSync. All of these work with similar clouds.


We have just discovered a problem: Cloudflare R2 doesn’t allow public-read from the S3 endpoint URL, only from a custom domain or its randomly generated public domain.
(Pre-signed downloads work; direct public access just isn’t supported.)
But Discourse only uses the CDN URL for embedded images, not for direct downloads, which use the S3 endpoint URL.
Is there a way to make it use the CDN URL for all files, or to force the use of a presigned URL?


The workaround mentioned in that post works: adding ?dl=1 fixes it, because it forces Discourse to use a presigned S3 URL.
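As a sketch of what the workaround amounts to when building links (stdlib only; `force_download` is a made-up helper name and the URL is an example, not a real bucket):

```ruby
require "uri"

# Append dl=1 to an upload URL, which makes Discourse serve the
# file via a presigned S3 URL instead of a direct public link.
def force_download(url)
  uri = URI.parse(url)
  params = URI.decode_www_form(uri.query.to_s) << ["dl", "1"]
  uri.query = URI.encode_www_form(params)
  uri.to_s
end

force_download("https://example.com/uploads/doc.pdf")
# => "https://example.com/uploads/doc.pdf?dl=1"
```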


This was fixed on 2023-03-16; R2 now works with Discourse like a charm, even on the free plan.


I also see this with some frequency (every several months), even though my Discourse is running in AWS Lightsail and I’m uploading to AWS S3, so I’m not sure it’s Wasabi’s fault.

Would it be possible to catch this error and alert the admin? I do check the disk space and remove old backups when I upgrade but sometimes that’s too late and the forum goes down for no disk space.


I am fairly certain that the issue was that automatic OS reboots for security updates were happening while the backup was running. Make sure you schedule your OS reboots and your backups at different times. It was after I’d moved that site from Wasabi that I came up with this explanation, but I’m pretty sure that’s what it was.


uptime says it’s been up for 300 days, so I don’t think that’s the problem. But along similar lines, I had Discourse backups scheduled at 2:00 am and Lightsail snapshots at 2:30 am, so maybe the upload sometimes isn’t complete and the snapshot messes with it. I’ve separated the two operations by an hour; we’ll see if it makes a difference.

Regardless I think it’s reasonable to warn admins if the upload fails, for whatever reason.


I think that it’s time that you do kernel upgrades and reboot. :slight_smile:

Could you be running out of RAM?

After implementing remote Backblaze backups, I see this error in my dashboard:

The server is configured to upload files to S3, but there is no S3 CDN configured. This can lead to expensive S3 costs and slower site performance. See “Using Object Storage for Uploads” to learn more.

I didn’t configure uploading of files, I only configured backups via this config:

DISCOURSE_S3_BUCKET: community-forum
DISCOURSE_S3_BACKUP_BUCKET: community-forum/backups

Did I do something wrong?

Something seems misconfigured. When I try uploading a file to a post, I receive this error:

Unsupported value for canned acl 'public-read'

Any assistance would be appreciated.


Remove this if you don’t want uploads to go to S3.
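If the goal is backups only, a minimal sketch of the relevant `app.yml` section would look something like this (bucket, region, endpoint, and keys below are placeholder examples for Backblaze; the point is that `DISCOURSE_S3_BUCKET` is omitted, so post uploads stay on local disk):

```yaml
env:
  # Backups go to S3-compatible storage; uploads stay local
  # because DISCOURSE_S3_BUCKET is not set.
  DISCOURSE_BACKUP_LOCATION: s3
  DISCOURSE_S3_BACKUP_BUCKET: community-forum/backups
  DISCOURSE_S3_REGION: us-west-004
  DISCOURSE_S3_ENDPOINT: https://s3.us-west-004.backblazeb2.com
  DISCOURSE_S3_ACCESS_KEY_ID: your-key-id
  DISCOURSE_S3_SECRET_ACCESS_KEY: your-application-key
```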


You saved the day brother. :+1:t3: Thanks so much!


Did that seem to work?


It has happened once in the past month since I separated the two processes by an hour, so it didn’t “fix” it, and it doesn’t happen often enough to say whether it helped.

On the bright side, I noticed there is a backup status section on the admin page that shows available disk space, which saves me from constantly opening a terminal and doing a df just to check for stuck backups. I customized the text to remind myself that I expect around 80 GB free.



That’s a good idea.

I saw the image before I read that you had customized the text and was wondering what logic was at play to determine that was “good”!


I just couldn’t get this to work with Scaleway using the Bitnami Discourse image.
The env variables were set but clearly weren’t being read/applied correctly (or at all?).

So I’ve set the S3 variables in the admin panel and set the region directly in the rails console (still hoping that this just becomes a text field).

It gave me a validation error, but I just commented out the validation check before updating the setting, then restored it afterwards.
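For anyone else hitting the region dropdown, the console route looks roughly like this (a sketch only, run inside the Discourse container; note that the plain assignment still triggers the same validation, which is why the workaround above was needed, and `fr-par` is just Scaleway’s Paris region as an example):

```
cd /var/discourse
./launcher enter app
rails c
> SiteSetting.s3_region = "fr-par"
```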


The Bitnami image isn’t packaged by us and doesn’t follow our recommendations. Everything documented here is only tested against the official install.


This has been solved by enabling “s3 use cdn url for all uploads”, an option recently added by Discourse.
Since we were using R2 before, we needed to use discourse remap to manually replace the broken links, synced the S3 files just in case, and then rebaked all posts.
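For anyone doing the same migration, the remap-and-rebake step is roughly this (run inside the container; the two hostnames are placeholders for your old R2 URL and your new CDN URL, and you should take a backup first since remap edits posts in place):

```
cd /var/discourse
./launcher enter app
discourse remap old-account.r2.cloudflarestorage.com new-cdn.example.com
rake posts:rebake
```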


I’m trying to set this up with idrive e2, which is S3 compatible. However, I’m getting a not very helpful error/stack trace at the end of ./launcher rebuild app:

```text
I, [2023-10-14T15:08:08.026184 #1]  INFO -- : > cd /var/www/discourse && sudo -E -u discourse bundle exec rake s3:upload_assets
rake aborted!
Aws::S3::Errors::InternalError: We encountered an internal error, please try again.
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/plugins/sse_cpk.rb:24:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/plugins/dualstack.rb:27:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/plugins/accelerate.rb:56:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/checksum_algorithm.rb:111:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:22:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/idempotency_token.rb:19:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/param_converter.rb:26:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/request_callback.rb:71:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/response_paging.rb:12:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/response_target.rb:24:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/request.rb:72:in `send_request'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/client.rb:12369:in `put_object'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/object.rb:1472:in `put'
/var/www/discourse/lib/s3_helper.rb:78:in `upload'
/var/www/discourse/lib/tasks/s3.rake:41:in `block in upload'
/var/www/discourse/lib/tasks/s3.rake:41:in `open'
/var/www/discourse/lib/tasks/s3.rake:41:in `upload'
/var/www/discourse/lib/tasks/s3.rake:197:in `block (2 levels) in <main>'
/var/www/discourse/lib/tasks/s3.rake:197:in `each'
/var/www/discourse/lib/tasks/s3.rake:197:in `block in <main>'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
/usr/local/bin/bundle:25:in `load'
/usr/local/bin/bundle:25:in `<main>'
Tasks: TOP => s3:upload_assets
(See full trace by running task with --trace)
I, [2023-10-14T15:08:16.413098 #1]  INFO -- : Installing CORS rules...
Uploading: assets/admin-2ebebf57104b0beb47a1c82fe5a8c6decd07f60a706640345fed296a094d1536.js
```

This is the config I’ve been using, but I’ve also tried it with DISCOURSE_S3_CONFIGURE_TOMBSTONE_POLICY and DISCOURSE_S3_HTTP_CONTINUE_TIMEOUT:

  DISCOURSE_S3_BUCKET: discourse

Note I’m not using it for backups (that’s already set up in the UI with Backblaze), nor DISCOURSE_CDN_URL, because I’m not sure idrive supports that; I planned on experimenting with that once I got some actual files in the bucket.


Looks like it’s not compatible enough with S3 for Discourse’s needs.

If you want to dig further, the next step would be reproducing this in a development install and getting the exact API call that fails.
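One low-effort way to capture that, assuming you can install the aws-sdk-s3 gem somewhere, is to turn on the SDK’s wire trace and replay the failing upload by hand; the endpoint, credentials, and bucket below are placeholders for your idrive e2 values:

```ruby
require "aws-sdk-s3" # gem install aws-sdk-s3
require "logger"

# http_wire_trace dumps every HTTP request/response, so the exact
# call that returns InternalError appears in the output.
client = Aws::S3::Client.new(
  endpoint: "https://your-e2-endpoint.example.com",
  region: "us-east-1",
  access_key_id: "your-key",
  secret_access_key: "your-secret",
  force_path_style: true, # many S3-compatible providers need this
  http_wire_trace: true,
  logger: Logger.new($stdout)
)

# Replays the same operation the rake task died on.
client.put_object(bucket: "discourse", key: "test.txt", body: "hello")
```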