Configure an S3 compatible object storage provider for uploads

Will Contabo S3 support/compatibility be added? Or has someone found a workaround to make it work?

2 Likes

We, the Discourse maintainers, only support AWS S3. The providers listed here were tested by either us or the community to see if they implement enough of the S3 API to be compatible with Discourse.

Per the OP, @tuxed tested Contabo and found it lacking. It's on Contabo to bring their implementation into compliance with the S3 API, if they deem that aligned with their business interests; it's not something that we can do.

3 Likes

Is this still buggy? Why is the DigitalOcean CDN not good?

1 Like

Did you follow the links?

It looks like the CDN doesn't know about metadata. But you could try it and see if it works! Let us know if you do. I was wondering just the other day whether it had been fixed. From the looks of the documentation, I'm not going to be trying it myself any time soon.

1 Like

I am looking for an easy way to add CDN support to my forum on DigitalOcean. If S3 is easier, I would go with that option.

I don't want to take a risk with a setup that didn't work well before.

1 Like

The recommended solution is just not to use their CDN. You can use Spaces, if you follow the instructions above, and something like bunny.net for the CDN. It's cheap and easy.

AWS S3 is what CDCK uses, so it's a bit better tested and supported, but unless you're already handy with AWS, a Spaces bucket is a good solution. Just don't use the DigitalOcean CDN.
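For reference, a minimal sketch of the relevant app.yml env settings for that combination - the region, endpoint, bucket name, and bunny.net hostname below are all placeholders to swap for your own:

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: us-east-1                       # with an explicit endpoint the region is mostly a formality, but Discourse requires one
  DISCOURSE_S3_ENDPOINT: https://nyc3.digitaloceanspaces.com
  DISCOURSE_S3_ACCESS_KEY_ID: <spaces key>
  DISCOURSE_S3_SECRET_ACCESS_KEY: <spaces secret>
  DISCOURSE_S3_BUCKET: my-forum-uploads
  DISCOURSE_S3_CDN_URL: https://my-forum.b-cdn.net     # bunny.net pull zone whose origin is the bucket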

1 Like

I just went through this - CDN setup, keeping images local for now - first with Fastly, then some other provider I don't recall. I settled on bunny.net; very easy to set up, and they have a how-to specific to Discourse. We are self-hosted on DO with 100GB+ of images. 65% cache hit rate and climbing.

2 Likes

Does the s3 configure tombstone policy setting only work on AWS?

1 Like

No. It's a problem only on Backblaze.
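If you want to verify which lifecycle rules your provider actually accepted, the AWS CLI can ask the bucket directly (the endpoint and bucket name below are placeholders):

  aws --endpoint-url=https://s3.<your-provider-endpoint> \
      s3api get-bucket-lifecycle-configuration --bucket my-bucket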

2 Likes

3 posts were split to a new topic: Exploring Solutions for User Profile Picture Upload Problems

Whew, where do I start? So, I'm using Cloudflare for caching and DNS, and Backblaze B2 for storage. I was able to get it to work, but only partially. During a ./launcher rebuild app I saw that it was uploading assets, so I was super excited that it appeared to be working. After it completed the rebuild successfully, I was unable to access the site. I just get some moving dots in the middle of the page.

Based on the Backblaze article Deliver Public Backblaze B2 Content Through Cloudflare CDN I have set a Proxied CNAME record that points to the Friendly URL origin f000.backblazeb2.com called gtech-cdn.

CNAME gtech-cdn → f000.backblazeb2.com
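(Sanity check from my end - note that because the record is proxied, dig answers with Cloudflare edge IPs rather than the B2 hostname:)

  dig +short gtech-cdn.mmhmm.com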

The article also talks about Page Rules; I have tried toggling them on and off to no avail.

Here are the pertinent configuration items:

  DISCOURSE_HOSTNAME: mmhmm.com

  DISCOURSE_CDN_URL: https://mmhmm.com

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: us-west-000
  DISCOURSE_S3_ENDPOINT: https://s3.us-west-000.backblazeb2.com
  DISCOURSE_S3_ACCESS_KEY_ID: <secret>
  DISCOURSE_S3_SECRET_ACCESS_KEY: <secret>
  DISCOURSE_S3_CDN_URL: https://gtech-cdn.mmhmm.com
  DISCOURSE_S3_BUCKET: gtech-uploads
  DISCOURSE_S3_BACKUP_BUCKET: gtech-uploads/backups
  DISCOURSE_BACKUP_LOCATION: s3

Under the **hooks:** section...

  after_assets_precompile:
    - exec:
        cd: $home
        cmd:
          - sudo -E -u discourse bundle exec rake s3:upload_assets
          - sudo -E -u discourse bundle exec rake s3:expire_missing_assets
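To double-check that the hook really did push assets into the bucket, I can list them with the AWS CLI pointed at the B2 endpoint (this assumes the CLI is installed and the B2 application keys are exported as AWS credentials):

  aws --endpoint-url=https://s3.us-west-000.backblazeb2.com \
      s3 ls s3://gtech-uploads/assets/ | head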

One of the things that is confusing to me is what to put in the two variables DISCOURSE_S3_CDN_URL and DISCOURSE_CDN_URL. Do I have them set properly based on the info that I have provided?

Looking at the browser dev tools console, I'm getting 404 errors on .js scripts. The URL doesn't appear to be built properly. Shouldn't it have /file/ in there before /assets? If I add that manually to create a proper URL, it works:

https://gtech-cdn.mmhmm.com/file/gtech-uploads/assets/google-universal-analytics-v4-e154af4adb3c483a3aba7f9a7229b8881cdc5cf369290923d965a2ad30163ae8.br.js
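The same thing reproduced with curl:

  # the URL Discourse generates - 404 from the friendly-URL origin
  curl -I https://gtech-cdn.mmhmm.com/assets/google-universal-analytics-v4-e154af4adb3c483a3aba7f9a7229b8881cdc5cf369290923d965a2ad30163ae8.br.js
  # the same asset with /file/<bucket>/ added by hand - 200
  curl -I https://gtech-cdn.mmhmm.com/file/gtech-uploads/assets/google-universal-analytics-v4-e154af4adb3c483a3aba7f9a7229b8881cdc5cf369290923d965a2ad30163ae8.br.js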

Thanks for any help, itā€™s much appreciated!!!

1 Like

https://gtech-cdn.mmhmm.com doesn't resolve, so that's the first thing to fix.

I'm not sure that you can use Cloudflare as a CDN like that, but maybe I'm wrong.

1 Like

Sorry, I should have mentioned that mmhmm.com is a fake domain standing in for my real one. The real domain does respond to a ping.

As far as Cloudflare not being able to be used as a CDN, I guess I'm not following. The article I linked is clearly for using it as a CDN. If that is not true, then again, what values should be used for the two variables DISCOURSE_S3_CDN_URL and DISCOURSE_CDN_URL?

Cheers,

If you give fake URLs, you can only get fake answers.

Does the URL serve the data that is expected? Can you retrieve it from the forum URL?

I think the S3 CDN should work. Using the forum URL for the forum CDN is the part I'm not sure about.

A normal CDN is on a different URL than the forum, so the CDN can count on the data being static rather than having to guess what's dynamic.
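Concretely, the two settings point at two different pull zones (hostnames here are placeholders):

  # fronts the forum itself - its origin is the forum hostname
  DISCOURSE_CDN_URL: https://cdn.example.com
  # fronts the object-storage bucket - its origin is the bucket endpoint
  DISCOURSE_S3_CDN_URL: https://s3cdn.example.com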

1 Like

I do my best not to plaster my information across various forums, so please excuse my secrecy on the matter.

The forum sits at "https://mmhmm.com", which is a Cloudflare DNS record that is proxied (cached). Prior to configuring Discourse to use Backblaze it all functioned properly.

"https://gtech-cdn.mmhmm.com", as stated previously, resolves and also responds to a ping. The target of the CNAME record, f000.backblazeb2.com, also resolves. That B2 Friendly URL origin is what the article instructs you to use. That isn't the issue, though. The issue is that Discourse is serving up URLs for the .js files using an invalid URL that will never work, as it is missing the "/file/gtech-uploads" part of the path. If you take one of those incomplete .js URLs and add that missing info to it manually, it will load the text of the .js file just fine.

Of course, I'm still trying to wrap my head around how this is all supposed to be working with those two variables. I'm more of a visual learner and could really use a flow chart or something to help me understand what's supposed to be happening with the interactions between Cloudflare CDN, Discourse, and Backblaze B2.

Thanks for your help.

Oh, and I'll try to address your last sentence about a normal CDN...

The article from Backblaze has you create two Page Rules per bucket (in my case one bucket is used), which, if I am understanding it correctly, are processed in order, sort of like firewall rules.

Rule 1 says that https://gtech-cdn.mmhmm.com/file/*/* should use standard caching (which is set elsewhere in Cloudflare to 1 month).
Rule 2 redirects (302 - temporary redirect) anything that doesn't match the Rule 1 pattern.

So not everything will be cached by going to mmhmm.com... at least, that is my understanding.

EDIT: This did not work.
Focusing on this a little bit more, I decided for obvious reasons that I should use the S3 URL as the CNAME target instead of the Friendly URL that the Backblaze article suggested. I'm now just waiting on the DNS record TTL to expire.
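If I have the virtual-host form of B2's S3 URLs right, the new record is:

  CNAME gtech-cdn → gtech-uploads.s3.us-west-000.backblazeb2.com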


Regarding this hook:

I don't see anything with s3 in the rake --tasks dump. Is this still relevant or am I missing some plugin?
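For reference, this is how I'm checking inside the app container (-T alone lists only tasks that have a description, so I'm adding -A to include undocumented ones):

  cd /var/www/discourse
  bundle exec rake -AT | grep s3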

Also seeing this when I manually run:
uploads:migrate_to_s3

rake aborted!
FileStore::ToS3MigrationError: Some uploads could not be migrated to the new scheme. You need to fix this manually. (FileStore::ToS3MigrationError)
/var/www/discourse/lib/file_store/to_s3_migration.rb:156:in `migrate_to_s3'
/var/www/discourse/lib/file_store/to_s3_migration.rb:59:in `migrate'
/var/www/discourse/lib/tasks/uploads.rake:126:in `migrate_to_s3'
/var/www/discourse/lib/tasks/uploads.rake:106:in `block in migrate_to_s3_all_sites'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-6.0.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-6.0.0/lib/rails_multisite/connection_management/null_instance.rb:36:in `each_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-6.0.0/lib/rails_multisite/connection_management.rb:21:in `each_connection'
/var/www/discourse/lib/tasks/uploads.rake:104:in `migrate_to_s3_all_sites'
/var/www/discourse/lib/tasks/uploads.rake:100:in `block in <main>'
/usr/local/bin/bundle:25:in `load'
/usr/local/bin/bundle:25:in `<main>'
Tasks: TOP => uploads:migrate_to_s3
(See full trace by running task with --trace)
root@ubuntu-s-2vcpu-4gb-nyc2-01-app:/var/www/discourse# rake uploads:migrate_to_s3
Please note that migrating to S3 is currently not reversible! 
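Next I'll re-run it with --trace, as the output suggests, to see exactly where it stops:

  rake uploads:migrate_to_s3 --trace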

6 posts were split to a new topic: Cloudflare R2: Navigating Setup and Handling Configuration Errors