Configure an S3 compatible object storage provider for uploads

Hey @mcwumbly. This was very easy to find when I could search for “S3 clone”. I was unable to find it just now. Was there something wrong with that title? Is there a search that will find it? Could we add a (I can’t remember what it’s called) thing so it can auto link on some words like standard install does (but I can’t think of what words to use).


As someone who links that topic multiple times a week I kinda agree :stuck_out_tongue:

Maybe adding “s3 clones” to the OP body helps the search-fu?


I’ve found “S3 compatible” more common in the wild, which is why I changed it during a sweep of updating docs titles in general, for example: MinIO | AWS S3 Compatible Object Storage

I think the suggestion to stick other search terms in the OP body makes sense though. (I just added it in this one).


Seems fine. I guess we’ll have to change with the times. :person_shrugging:

Yes. It’s really not so hard. You can do it, @pfaffman!




Hello, has anyone managed to get Contabo Object Storage to work for S3-compatible uploads? It seems that when uploading, it prefixes the bucket name in the URL.

For example if you have a bucket called community it creates a URL like

I have seen this behavior in Duplicati, for example, but there it can be disabled so that it does not prefix the bucket name to the domain.

I would appreciate it if someone has a solution for using this object storage, because its prices are very good.

I have run several tests configuring a CNAME on my domain through Cloudflare to provide SSL, but the SSL certificate no longer covers the hostname because they use a wildcard, and if I deactivate the Cloudflare proxy it complains that the certificate is not valid.



Have you tried to set the S3 CDN setting to ? IMO that will work.

That doesn’t exist; it’s the Contabo endpoint.


Yes, but what would be the final URL of an example file in a bucket?



He means: if you upload a file to the bucket yourself (using whatever tool you can get to upload a file), what URL would you use to access the file?


The structure is

User: 9198f3bf2d6e43dd86fab037ebad3aee
Bucket: comunidad
File: castopod-1.png


That’s not a working URL. But I guess it might be if you replace that colon with a slash?

That’s not the way you described it in your first post, so maybe now he can make another suggestion.


So try setting


and rebuilding.


Cloudflare’s R2 is finally publicly available (it took just a year, apparently). Here’s the original announcement:

I created a bucket.

I created a token that includes: “Edit: Allow edit access of all objects and List, Write, and Delete operations of all buckets”

Here’s what I’ve tried:

  DISCOURSE_S3_BUCKET: lc-testing
  DISCOURSE_S3_BACKUP_BUCKET: lc-testing/backups

But uploading assets fails with this:

Aws::S3::Errors::NotImplemented: Header 'x-amz-acl' with value 'public-read' not implemented

And then I remembered to make the bucket public as described at Public Buckets · Cloudflare R2 docs

But it still didn’t work.

S3 API Compatibility · Cloudflare R2 docs shows that x-amz-acl is unimplemented.

Glancing at the Discourse code, it isn’t obvious to me that it’s possible to make R2 work without changes to core.

After disabling uploads, backups work, so R2 appears to be a very cheap way to have S3 backups. But since I had made that bucket public, the backup was also public (if you can guess the filename), so if this does get figured out, you’ll want separate buckets for backups and uploads.
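For the backups-only setup described above, a rough sketch of the relevant `app.yml` env section might look like this. The bucket name and R2 endpoint are placeholders, and the `DISCOURSE_S3_*` variable names follow the standard Discourse S3 configuration; check the official install docs before relying on any of them.

```yaml
env:
  # Backups-only: keep S3 uploads disabled in site settings.
  DISCOURSE_S3_REGION: auto
  DISCOURSE_S3_ENDPOINT: "https://<account-id>.r2.cloudflarestorage.com"  # placeholder
  DISCOURSE_S3_ACCESS_KEY_ID: "<access-key>"
  DISCOURSE_S3_SECRET_ACCESS_KEY: "<secret-key>"
  DISCOURSE_S3_BACKUP_BUCKET: discourse-backups   # private bucket, separate from uploads
  DISCOURSE_BACKUP_LOCATION: s3
```

Keeping the backup bucket private (and separate from any public uploads bucket) avoids the exposed-backup problem mentioned above.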

I removed this line and was able to see that it uploaded a file, and was able to access it using a custom domain as the s3_cdn_url. (And a similar edit to the s3 rake task allows assets to get uploaded.)


So I guess we add it as not compatible in the OP until they implement object level ACL. Thanks for trying it out!


Yeah. The required changes to core to allow it to skip setting the ACL seem pretty hairy. You could say that it’s OK for backups only. If you don’t jump through hoops to make the bucket public, it should be fine.


The problem is the S3 endpoint.


I just tested R2, but it appears they are not respecting our “Content-Encoding” info, even though their docs say they will. Maybe in a year it will be usable.


This needs a warning blurb added to the MinIO or general sections. We need a notice made in here that “Discourse uses DNS mode for paths on S3-compatible storage systems. If the backend only supports path-mode and not DNS mode for bucket paths, then it is not Discourse compatible.” Which is why MinIO was originally not on the list and later added.
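To make the path-mode vs DNS-mode distinction concrete, here is a small illustration of how the two URL styles differ. The endpoint, bucket, and file names here are hypothetical, chosen only to show where the bucket name lands in each style:

```shell
# Hypothetical names for illustration only
endpoint="s3.example.com"
bucket="comunidad"
file="castopod-1.png"

# Path-style: the bucket name is part of the URL path
path_style="https://${endpoint}/${bucket}/${file}"

# Virtual-hosted (DNS) style: the bucket name becomes a subdomain
dns_style="https://${bucket}.${endpoint}/${file}"

echo "$path_style"
echo "$dns_style"
```

A backend that only serves the first form (and cannot resolve bucket-name subdomains) is what the warning above calls path-mode-only.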

I also need the MinIO Storage Server section updated: caveat #2 should state the following:

  1. You have Domain Support enabled in the MinIO configuration for Domain-driven bucket paths. This is mandatory as Discourse does not support non-domain path-driven bucket paths with S3 storage mechanisms.
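For reference, domain support in MinIO is typically enabled with the `MINIO_DOMAIN` environment variable. A minimal sketch, assuming a hypothetical hostname (you also need wildcard DNS, e.g. `*.minio.example.com`, pointing at the MinIO server):

```shell
# Enable virtual-host (DNS) style bucket access: buckets are then
# served as subdomains such as mybucket.minio.example.com.
export MINIO_DOMAIN=minio.example.com
minio server /data
```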

EDIT: Looks like with this post I got Member status (trust level 2) so I was able to edit the wiki post now. No action needed from moderators, even though I asked them to make the edits.


Awesome! Thanks for your help in keeping things up to date. That looks like the kind of warning I’d be happy to have.

:clinking_glasses: :palms_up_together: