How to Configure Cloudflare R2 for your Discourse Community

Cloudflare R2 buckets can be used to store static assets like images and GIFs for the Discourse community, but they cannot be used to store community backups!

Introduction:

Cloudflare R2 object storage can be used as an alternative to Amazon S3 for storing uploads for your Discourse forum. The following steps outline how to configure this.

Configuration Steps:

  1. Enable S3 uploads: Check the box to enable S3 uploads in your Discourse settings.
  2. S3 access key ID: Enter the API key ID for your R2 storage bucket. This is the ID provided when you created an API token for your bucket.
  3. Secret access key: Enter the secret key that was provided when you created the API token granting access to your storage bucket. Important: This secret key is only displayed once, so make sure to back it up securely.
  4. S3 region: You can enter any region; it doesn’t matter for R2.
  5. S3 upload bucket: Enter the name of your R2 storage bucket.
  6. S3 endpoint: Enter the S3 API link for your R2 bucket, which has the form https://<account-id>.r2.cloudflarestorage.com (host only, with no bucket name in the path). Refer to the Cloudflare R2 dashboard to find this link.
  7. S3 CDN URL: Enter the public R2.dev storage bucket URL for your bucket. This will also be found in your Cloudflare R2 dashboard.
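If you prefer the yml approach recommended elsewhere in this thread, the same settings map onto environment variables in app.yml. This is a sketch with placeholder values; substitute your own keys, bucket name, and account ID from the R2 dashboard:

```yaml
## Sketch: the settings above expressed as app.yml env vars
## instead of database-stored site settings (placeholders throughout).
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: auto
DISCOURSE_S3_ACCESS_KEY_ID: <your R2 access key ID>
DISCOURSE_S3_SECRET_ACCESS_KEY: <your R2 secret key>
DISCOURSE_S3_BUCKET: <bucket name>
DISCOURSE_S3_ENDPOINT: https://<account-id>.r2.cloudflarestorage.com
DISCOURSE_S3_CDN_URL: https://pub-<id>.r2.dev
```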

Completion:

Once these settings are configured, your Discourse forum will be set up to use Cloudflare R2 for storage.

Free Tier Information:

Cloudflare’s R2 service provides a free tier that includes 10 GB of storage, 1 million Class A (write) operations, and 10 million Class B (read) operations per month.


I recommend that you follow the examples in Configure an S3 compatible object storage provider for uploads and put the settings in your yml file rather than in the database.

Thank you for your feedback. I have carefully read the guide previously, and I believe the advice regarding Cloudflare R2 is incorrect. The article suggests that the Discourse community does not support Cloudflare R2 buckets. However, in reality, Cloudflare R2 is highly compatible with S3 and can perfectly handle image and file uploads and downloads for the Discourse community. This has been verified through practical application on my community (starorigin.net).

And I suspect that was true when it was written.

It’s much better to put the S3 settings in the yml file than to configure them via the UI and store them in the database. Have you tried restoring your database to a new server?

Once you’ve set things up the recommended way, you can edit that topic or make a comment and ask someone else to.

You’re right, I use a Cloudflare R2 storage bucket to store my community’s images, GIFs, and other resources. This greatly reduces the load on the community server and speeds up page loading.

I haven’t set up automatic backups for my community to be stored in the Cloudflare R2 storage bucket because Cloudflare R2 buckets do not support storing compressed files. However, Cloudflare R2 storage can store the community’s PDFs, images, GIFs, and other static resources, which is also very good.

Funny. I thought I’d used R2 for backups before. But perhaps I’m not remembering correctly.

You can still follow the recommended instructions and make a note not to put backups there.

Thank you for the reminder, I will highlight this part.

Cloudflare R2 buckets can be used to store static assets like images and GIFs for the Discourse community, but they cannot be used to store community backups!

Just to update this post: there were some gotchas I had to work through before Cloudflare worked for me.


1. Region


The “any region” advice wasn’t true for me: I had to use “auto” or the exact region I had selected. “auto” is easier, so use auto.
If you need to know which options you can use, set any random string as your region and run:

sudo -E -u discourse bundle exec rake s3:upload_assets

If you use NixOS:

sudo discourse-rake s3:upload_assets

This will spit out an error listing your valid options.


2. API permissions


It’s also important to know that the restrictive API tokens do not work. You have to use Admin Read & Write;
Object Read & Write did not work.


Error when running sudo -E -u discourse bundle exec rake s3:upload_assets @Eviepayne

Set region to auto.
You might also have to set:
DISCOURSE_S3_INSTALL_CORS_RULE: false

I did both of those and rebuilt app.yml:

  ## S3 Configuration
  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: auto
  DISCOURSE_S3_ACCESS_KEY_ID: XXX
  DISCOURSE_S3_SECRET_ACCESS_KEY: XXX
  DISCOURSE_S3_CDN_URL: https://pub-XXX.r2.dev
  DISCOURSE_S3_ENDPOINT: https://XXX.r2.cloudflarestorage.com/XXX
  DISCOURSE_S3_BUCKET: XXX
  DISCOURSE_S3_INSTALL_CORS_RULE: false

I also confirmed the API keys are account API keys instead of just bucket-specific keys (as mentioned in the post). Also my Discourse instance shows this:

And after running sudo -E -u discourse bundle exec rake s3:upload_assets it shows:

`/root` is not writable.
Bundler will use `/tmp/bundler20250410-2363-zj2g6x2363' as your home directory temporarily.
Installing CORS rules...
skipping
rake aborted!
Seahorse::Client::NetworkingError: Empty or incomplete response body (Seahorse::Client::NetworkingError)
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-s3-1.182.0/lib/aws-sdk-s3/plugins/sse_cpk.rb:24:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-s3-1.182.0/lib/aws-sdk-s3/plugins/dualstack.rb:21:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-s3-1.182.0/lib/aws-sdk-s3/plugins/accelerate.rb:43:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/plugins/checksum_algorithm.rb:169:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:16:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/plugins/invocation_id.rb:16:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/plugins/idempotency_token.rb:19:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/plugins/param_converter.rb:26:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/seahorse/client/plugins/request_callback.rb:89:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/plugins/response_paging.rb:12:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/seahorse/client/plugins/response_target.rb:24:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/plugins/telemetry.rb:39:in `block in call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/telemetry/no_op.rb:29:in `in_span'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/plugins/telemetry.rb:53:in `span_wrapper'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/plugins/telemetry.rb:39:in `call'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/seahorse/client/request.rb:72:in `send_request'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-s3-1.182.0/lib/aws-sdk-s3/client.rb:12654:in `list_objects_v2'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-s3-1.182.0/lib/aws-sdk-s3/bucket.rb:1513:in `block (2 levels) in objects'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/plugins/user_agent.rb:69:in `metric'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-s3-1.182.0/lib/aws-sdk-s3/bucket.rb:1512:in `block in objects'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/resources/collection.rb:101:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/resources/collection.rb:101:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/resources/collection.rb:101:in `block in non_empty_batches'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/resources/collection.rb:52:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/resources/collection.rb:52:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/resources/collection.rb:52:in `block in each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/resources/collection.rb:58:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/resources/collection.rb:58:in `each'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/aws-sdk-core-3.219.0/lib/aws-sdk-core/resources/collection.rb:58:in `each'
/var/www/discourse/lib/tasks/s3.rake:14:in `map'
/var/www/discourse/lib/tasks/s3.rake:14:in `existing_assets'
/var/www/discourse/lib/tasks/s3.rake:24:in `should_skip?'
/var/www/discourse/lib/tasks/s3.rake:36:in `upload'
/var/www/discourse/lib/tasks/s3.rake:197:in `block (2 levels) in <main>'
/var/www/discourse/lib/tasks/s3.rake:197:in `each'
/var/www/discourse/lib/tasks/s3.rake:197:in `block in <main>'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rake-13.2.1/exe/rake:27:in `<top (required)>'
/usr/local/bin/bundle:25:in `load'
/usr/local/bin/bundle:25:in `<main>'
Tasks: TOP => s3:upload_assets
(See full trace by running task with --trace)

I think you might have to remove the bucket name from the endpoint.
The trailing /xxx should be removed so the endpoint ends at .com.
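In other words, if your endpoint mistakenly includes the bucket as a path segment, strip it back to the bare host. A quick shell sketch (the host and bucket are placeholders):

```shell
# Placeholder endpoint that mistakenly includes the bucket name in the path.
ENDPOINT="https://XXX.r2.cloudflarestorage.com/my-bucket"

# Strip the trailing /bucket path segment so only the host remains.
# (Only do this when a path segment is actually present.)
ENDPOINT="${ENDPOINT%/*}"

echo "$ENDPOINT"   # https://XXX.r2.cloudflarestorage.com
```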

Rebuilding and will re-run command, thank you for helping me with this!

My app.yml looks like this below now:

  ## S3 Configuration
  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: auto
  DISCOURSE_S3_ACCESS_KEY_ID: XXX
  DISCOURSE_S3_SECRET_ACCESS_KEY: XXX
  DISCOURSE_S3_CDN_URL: https://pub-XXX.r2.dev
  DISCOURSE_S3_ENDPOINT: https://XXX.r2.cloudflarestorage.com
  DISCOURSE_S3_BUCKET: XXX
  DISCOURSE_S3_INSTALL_CORS_RULE: false

I believe all of that is correct.
Be sure the CDN_URL (https://pub-xxx.r2.dev)
has public read access so anonymous users can see the assets.
You can tell what’s going on in the browser’s dev tools: you’ll get a bunch of 403s and red requests in the network tab if the permissions are wrong.

Yes, I believe so:

Is this the right setting:

That’s one way to do it, but it’s not the recommended way, and you’ll experience issues.
Assuming you already have your domain and Cloudflare is already your DNS provider:

Cloudflare will automatically proxy and cache for that domain.
You can then change the CDN_URL to that custom domain.

Oh, I need to connect the custom domain to the bucket?

Inside the R2 bucket settings there’s a public access setting.
Set a unique subdomain for it. (Cloudflare will automatically create the DNS record for you, as well as handle proxying and caching.)
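Once the custom domain is connected to the bucket, the only Discourse-side change is pointing the CDN URL at it. A sketch, where cdn.example.com stands in for whatever subdomain you chose:

```yaml
# Hypothetical custom subdomain connected to the bucket; replace with yours.
DISCOURSE_S3_CDN_URL: https://cdn.example.com
```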

I think I have it?

Have you gotten backups to also work with Cloudflare R2, and is it possible (assuming backups to Cloudflare R2 are possible) to make it back up both locally and to Cloudflare R2?

Also, does the script uploading all the assets mean that it will delete them locally (to free up storage)? Or is there a separate procedure I need for that?

Thank you for taking the time to help me with this 🙂

I personally haven’t tried.
My forum falls under the “unsupported” category because my database is external and I have a different backup strategy than the pg_dumps the forum uses.
From what I hear, backups don’t work on Cloudflare R2, but there’s nothing stopping you from trying it.
