Configure an S3 compatible object storage provider for uploads

Will Contabo S3 support/compatibility be added, or has someone found a workaround to make it work?


We, the Discourse maintainers, only support AWS S3. The providers listed here were tested by either us or the community to see if they implement enough of the S3 API to be compatible with Discourse.

Per the OP, @tuxed tested Contabo and found it lacking. It’s on Contabo to bring their implementation into compliance with S3, if they deem that aligned with their business interests; it’s not something that we can do.


Is this still buggy? Why is the DigitalOcean CDN not good?


Did you follow the links?

It looks like the CDN doesn’t know about metadata. But you could try it and see if it works! Let us know if you do. I was wondering just the other day whether it had been fixed. From the looks of the documentation, I’m not going to try it myself any time soon.


I am looking for an easy way to add cdn support to my forum on digitalocean. If S3 is easier I would go with that option.

I don’t want to take a risk with a setup that didn’t work well before.


The recommended solution is simply not to use their CDN. You can use Spaces, if you follow the instructions above, and something like for the CDN. It’s cheap and easy.

AWS S3 is what CDCK uses, so it’s a bit better tested and supported, but unless you’re already handy with AWS, a Spaces bucket is a good solution. Just don’t use the DigitalOcean CDN.


I just went through this (CDN setup, keeping images local for now), first with Fastly, then some other provider I don’t recall. I settled on , which is very easy to set up; they have a how-to specific to Discourse. We are self-hosted on DO with 100 GB+ of images. 65% cache hit rate and climbing.


Does the `s3 configure tombstone policy` only work on ?


No. It’s a problem only on Backblaze.


3 posts were split to a new topic: Exploring Solutions for User Profile Picture Upload Problems

Whew, where do I start? So, I’m using Cloudflare for caching and DNS, and Backblaze B2 for storage. I was able to get it to work, but only partially. During a ./launcher rebuild app I saw that it was uploading assets, so I was super excited that it appeared to be working. After it completed the rebuild successfully, I was unable to access the site. I just get some moving dots in the middle of the page.

Based on the Backblaze article Deliver Public Backblaze B2 Content Through Cloudflare CDN I have set a Proxied CNAME record that points to the Friendly URL origin called gtech-cdn.

CNAME gtech-cdn →

The article also talks about Page Rules; I have tried toggling them on and off to no avail.

Here’s the pertinent configuration items:



  DISCOURSE_S3_REGION: us-west-000
  DISCOURSE_S3_BUCKET: gtech-uploads
  DISCOURSE_S3_BACKUP_BUCKET: gtech-uploads/backups

Under the **hooks:** section...

    - exec:
        cd: $home
        cmd:
          - sudo -E -u discourse bundle exec rake s3:upload_assets
          - sudo -E -u discourse bundle exec rake s3:expire_missing_assets

One of the things that is confusing to me is what to put in the two variables DISCOURSE_S3_CDN_URL, and DISCOURSE_CDN_URL. Do I have them set properly based on the info that I have provided?
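
For what it’s worth, the two variables serve different roles, so here is a sketch of how they are typically set in app.yml. The hostnames below are placeholders, not anyone’s real values:

```yaml
  ## Placeholders only -- substitute your own hostnames.
  # CDN hostname that fronts the bucket (uploads and compiled assets);
  # this is the proxied CNAME you set up for the bucket.
  DISCOURSE_S3_CDN_URL: https://gtech-cdn.example.com
  # A separate CDN in front of the forum application itself;
  # leave it unset if you only have a CDN for the bucket.
  #DISCOURSE_CDN_URL: https://forum-cdn.example.com
```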

Looking at the browser dev tools console, I’m getting 404 errors on .js scripts. The URL doesn’t appear to be built properly. Shouldn’t it have /file/ in there before /assets? If I manually add that to create a proper URL, it works:
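
For context on that missing segment: Backblaze serves the same object through two URL styles, and only the “friendly” one carries the /file/&lt;bucket&gt;/ prefix. A sketch, where the bucket, host, and file names are all placeholders:

```python
# Illustration of the two Backblaze B2 URL styles; all names are placeholders.
bucket = "gtech-uploads"
key = "assets/discourse.js"

# "Friendly" download URL: the path includes /file/<bucket>/ before the key
friendly_url = f"https://f000.backblazeb2.com/file/{bucket}/{key}"

# S3-compatible URL: the bucket lives in the hostname, with no /file/ segment
s3_url = f"https://{bucket}.s3.us-west-000.backblazeb2.com/{key}"

print(friendly_url)
print(s3_url)
```

If Discourse is configured with an S3-compatible endpoint but the CDN hostname fronts the friendly-URL origin, the two path shapes won’t line up, which would produce exactly this kind of 404.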

Thanks for any help, it’s much appreciated!!!

doesn’t resolve, so that’s the first thing to fix.

I’m not sure that you can use Cloudflare as a CDN like that, but maybe I’m wrong.


Sorry, I should have mentioned that is a fake domain. It does respond to a ping.

As far as Cloudflare not being able to be used as a CDN, I guess I’m not following. The article I linked clearly is for using it as a CDN. If that is not true, then again, what values are to be used in the two variables DISCOURSE_S3_CDN_URL, and DISCOURSE_CDN_URL?


If you give fake URLs, you can only get fake answers.

Does the url serve the data that is expected? Can you retrieve it from the forum url?

I think the S3 CDN should work. Using the forum URL for the forum CDN, though, is the part I’m not sure about.

A normal CDN is a different URL than the forum, and the CDN can count on the data being static rather than having to guess what’s dynamic.


I do my best not to plaster my information across various forums, so please excuse my secrecy on the matter.

The forum sits at “”, which is a Cloudflare DNS record that is proxied (cached). Prior to configuring Discourse to use Backblaze, it all functioned properly. “”, as stated previously, resolves and also responds to a ping. The target of the CNAME record, , also resolves. That B2 Friendly URL origin is what the article instructs you to use. That isn’t the issue, though. The issue is that Discourse is serving up URLs for the .js files using an invalid URL that will never work, as it is missing the “/file/gtech-cdn” part of the path. If you take one of those incomplete .js URLs and manually add the missing info, it will load the text of the .js file just fine.

Of course, I’m still trying to wrap my head around how this is all supposed to be working with those two variables. I’m more of a visual learner and could really use a flow chart or something to help me understand what’s supposed to be happening with the interactions between Cloudflare CDN, Discourse, and Backblaze B2.

Thanks for your help.

Oh, and I’ll try to address your last sentence about a normal cdn…

The article from Backblaze has you create two page rules per bucket (in my case 1 bucket is used), which if I am understanding it correctly is sort of like a firewall rule in the way it processes.

Rule 1 says that */* should use standard caching (which is set elsewhere in Cloudflare to 1 month).
Rule 2 redirects anything (302, a temporary redirect) that doesn’t match the Rule 1 pattern.
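
Laid out side by side, the two rules as described read roughly like this (the hostname is a placeholder, and the redirect target depends on your setup):

```text
# Page rules, evaluated top-down like firewall rules:
1. cdn.example.com/file/gtech-cdn/*  ->  Cache Level: Standard, Edge Cache TTL: 1 month
2. cdn.example.com/*                 ->  Forwarding URL (302 temporary redirect)
```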

So not everything will be cached by going to… at least that is my understanding

EDIT: This did not work.
Focusing on this a little bit more, I decided for obvious reasons that I should use the S3 URL as the CNAME target instead of the Friendly URL that the Backblaze article suggested. I’m now just waiting on the DNS record TTL to expire.


Regarding this hook:

I don’t see anything with s3 in the `rake --tasks` dump. Is this still relevant, or am I missing some plugin?

Also seeing this when I manually run `rake uploads:migrate_to_s3`:

rake aborted!
FileStore::ToS3MigrationError: Some uploads could not be migrated to the new scheme. You need to fix this manually. (FileStore::ToS3MigrationError)
/var/www/discourse/lib/file_store/to_s3_migration.rb:156:in `migrate_to_s3'
/var/www/discourse/lib/file_store/to_s3_migration.rb:59:in `migrate'
/var/www/discourse/lib/tasks/uploads.rake:126:in `migrate_to_s3'
/var/www/discourse/lib/tasks/uploads.rake:106:in `block in migrate_to_s3_all_sites'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-6.0.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-6.0.0/lib/rails_multisite/connection_management/null_instance.rb:36:in `each_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-6.0.0/lib/rails_multisite/connection_management.rb:21:in `each_connection'
/var/www/discourse/lib/tasks/uploads.rake:104:in `migrate_to_s3_all_sites'
/var/www/discourse/lib/tasks/uploads.rake:100:in `block in <main>'
/usr/local/bin/bundle:25:in `load'
/usr/local/bin/bundle:25:in `<main>'
Tasks: TOP => uploads:migrate_to_s3
(See full trace by running task with --trace)
root@ubuntu-s-2vcpu-4gb-nyc2-01-app:/var/www/discourse# rake uploads:migrate_to_s3
Please note that migrating to S3 is currently not reversible! 

Is there still no development on cloudflare R2? It looked good to me…

Did you read the note about it above? It’s incompatible with gzip files, and there was no indication of a plan to change that. Maybe it works with everything but uploads?

Did you try it? Did it work?