Will Contabo S3 support/compatibility be added? Or has someone found a workaround to make it work?
We, the Discourse maintainers, only support AWS S3. The providers listed here were tested by either us or the community to see if they implement enough of the S3 API to be compatible with Discourse.
Per the OP, @tuxed tested Contabo and found it lacking. It's on Contabo to evolve their implementation's compliance with S3 if they deem it aligned with their business interests; it's not something we can do.
Is this still buggy? Why is the DigitalOcean CDN not good?
Did you follow the links?
It looks like the CDN doesn't know about metadata. But you could try it and see if it works! Let us know if you do. I was wondering just the other day whether it had been fixed. From the looks of the documentation, I'm not going to be trying it myself any time soon.
I am looking for an easy way to add CDN support to my forum on DigitalOcean. If S3 is easier, I would go with that option.
I don't want to take a risk with a setup that didn't work well before.
The recommended solution is just not to use their CDN. You can use Spaces, if you follow the instructions above, and something like bunny.net for the CDN. It's cheap and easy.
AWS S3 is what CDCK uses, so it's a bit better tested and supported, but unless you're already handy with AWS, a Spaces bucket is a good solution. Just don't use the DigitalOcean CDN.
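For reference, a minimal app.yml sketch of that setup might look like the following. Every hostname, bucket, and zone name here is a placeholder; only the Spaces endpoint format and bunny.net's b-cdn.net pull-zone domain are real conventions:

DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: us-east-1 # placeholder; the endpoint determines the actual Spaces region
DISCOURSE_S3_ENDPOINT: https://nyc3.digitaloceanspaces.com
DISCOURSE_S3_BUCKET: my-forum-uploads
DISCOURSE_S3_ACCESS_KEY_ID: <secret>
DISCOURSE_S3_SECRET_ACCESS_KEY: <secret>
DISCOURSE_S3_CDN_URL: https://my-pull-zone.b-cdn.net # bunny.net pull zone whose origin is the bucket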
I just went through this (CDN setup, keeping images local for now), first with Fastly, then some other provider I don't recall. I settled on bunny.net; it was very easy to set up, and they have a how-to specific to Discourse. We are self-hosted in DO with 100GB+ of images, seeing a 65% cache hit rate and climbing.
Does s3 configure tombstone policy work only on AWS?
No. It's a problem only on Backblaze.
3 posts were split to a new topic: Exploring Solutions for User Profile Picture Upload Problems
Whew, where do I start? So, I'm using Cloudflare for caching and DNS, and Backblaze B2 for storage. I was able to get it to work, but only partially. During a ./launcher rebuild app I saw that it was uploading assets, so I was super excited that it appeared to be working. After it completed the rebuild successfully, I was unable to access the site. I just get some moving dots in the middle of the page.
Based on the Backblaze article Deliver Public Backblaze B2 Content Through Cloudflare CDN, I have set a proxied CNAME record called gtech-cdn that points to the Friendly URL origin f000.backblazeb2.com.
CNAME gtech-cdn → f000.backblazeb2.com
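(A quick way to sanity-check that record once it propagates, assuming dig is available; because the record is proxied, expect Cloudflare edge IPs in the answer rather than the Backblaze hostname:)

dig +short gtech-cdn.mmhmm.com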
The article also talks about Page Rules; I have tried toggling them on and off to no avail.
Here are the pertinent configuration items:
DISCOURSE_HOSTNAME: mmhmm.com
DISCOURSE_CDN_URL: https://mmhmm.com
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: us-west-000
DISCOURSE_S3_ENDPOINT: https://s3.us-west-000.backblazeb2.com
DISCOURSE_S3_ACCESS_KEY_ID: <secret>
DISCOURSE_S3_SECRET_ACCESS_KEY: <secret>
DISCOURSE_S3_CDN_URL: https://gtech-cdn.mmhmm.com
DISCOURSE_S3_BUCKET: gtech-uploads
DISCOURSE_S3_BACKUP_BUCKET: gtech-uploads/backups
DISCOURSE_BACKUP_LOCATION: s3
Under the **hooks:** section...
after_assets_precompile:
  - exec:
      cd: $home
      cmd:
        - sudo -E -u discourse bundle exec rake s3:upload_assets
        - sudo -E -u discourse bundle exec rake s3:expire_missing_assets
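As a sanity check, assuming the AWS CLI is installed and configured with the same keys, something like this should confirm the precompiled assets actually landed in the bucket:

aws --endpoint-url https://s3.us-west-000.backblazeb2.com s3 ls s3://gtech-uploads/assets/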
One of the things that is confusing to me is what to put in the two variables DISCOURSE_S3_CDN_URL and DISCOURSE_CDN_URL. Do I have them set properly based on the info that I have provided?
Looking at the browser dev tools console, I'm getting 404 errors on .js scripts. The URL doesn't appear to be built properly. Shouldn't it have /file/ in there before /assets? If I add that manually to create a proper URL, it works:
https://gtech-cdn.mmhmm.com/file/gtech-uploads/assets/google-universal-analytics-v4-e154af4adb3c483a3aba7f9a7229b8881cdc5cf369290923d965a2ad30163ae8.br.js
Thanks for any help, it's much appreciated!!!
https://gtech-cdn.mmhmm.com doesn't resolve, so that's the first thing to fix.
I'm not sure that you can use Cloudflare as a CDN like that, but maybe I'm wrong.
Sorry, I should have mentioned that mmhmm.com is a fake domain. It does respond to a ping.
As far as Cloudflare not being able to be used as a CDN, I guess I'm not following. The article I linked is clearly about using it as a CDN. If that is not true, then again, what values should be used for the two variables DISCOURSE_S3_CDN_URL and DISCOURSE_CDN_URL?
Cheers,
If you give fake URLs, you can only get fake answers.
Does the URL serve the data that is expected? Can you retrieve it from the forum URL?
I think the S3 CDN should work. It's the use of the forum URL for the forum CDN that I'm not sure about.
A normal CDN is on a different URL than the forum, and the CDN can count on the data being static rather than having to guess what's dynamic.
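Roughly, the intended split looks like this; hostnames are illustrative, and the comments describe what each variable is for:

DISCOURSE_HOSTNAME: forum.example.com # where the Rails app itself is served
DISCOURSE_CDN_URL: https://app-cdn.example.com # optional CDN in front of the forum's own static assets
DISCOURSE_S3_CDN_URL: https://s3-cdn.example.com # CDN in front of the S3/B2 bucket holding uploads and assets

Pointing DISCOURSE_CDN_URL at the forum's own hostname, as in the config above, is the part in question: it only helps to the extent that the Cloudflare proxy already in front of that hostname caches the assets.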
I do my best not to plaster my information across various forums, so please excuse my secrecy on the matter.
The forum sits at https://mmhmm.com, which is a Cloudflare DNS record that is proxied (cached). Prior to configuring Discourse to use Backblaze, it all functioned properly.
https://gtech-cdn.mmhmm.com, as stated previously, resolves and also responds to a ping. The target of the CNAME record, f000.backblazeb2.com, also resolves. That B2 Friendly URL origin is what the article instructs you to use. That isn't the issue, though. The issue is that Discourse is serving up URLs for the .js files that will never work, because they are missing the /file/gtech-uploads part of the path. If you take one of those incomplete .js URLs and add the missing part manually, the .js file loads just fine.
Of course, I'm still trying to wrap my head around how this is all supposed to work with those two variables. I'm more of a visual learner and could really use a flow chart or something to help me understand what's supposed to happen in the interactions between the Cloudflare CDN, Discourse, and Backblaze B2.
Thanks for your help.
Oh, and I'll try to address your last sentence about a normal CDN...
The article from Backblaze has you create two page rules per bucket (in my case one bucket is used), which, if I am understanding it correctly, process in order, sort of like firewall rules.
Rule 1 says that https://gtech-cdn.mmhmm.com/file/*/* should use standard caching (which is set elsewhere in Cloudflare to 1 month).
Rule 2 redirects anything (302, a temporary redirect) that doesn't match the Rule 1 pattern.
So not everything will be cached by going to mmhmm.com... at least that is my understanding.
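In sketch form, with my hostname substituted in:

Rule 1: https://gtech-cdn.mmhmm.com/file/*/* → Cache Level: Standard (edge TTL configured elsewhere to 1 month)
Rule 2: https://gtech-cdn.mmhmm.com/* → Forwarding URL (302) for anything Rule 1 did not match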
EDIT: This did not work.
Focusing on this a little bit more, I decided for obvious reasons that I should use the S3 URL as the CNAME target instead of the Friendly URL that the Backblaze article suggested. I'm now just waiting on the DNS record TTL to expire.
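In other words, the record becomes (target taken from the DISCOURSE_S3_ENDPOINT above):

CNAME gtech-cdn → s3.us-west-000.backblazeb2.com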
Regarding this hook:
I don't see anything with s3 in the rake --tasks dump. Is this still relevant, or am I missing some plugin?
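(One note on rake --tasks: it only lists tasks that carry a description, so the s3:* tasks can exist and still be hidden. Listing all tasks, including undescribed ones, should reveal them:)

rake -AT | grep s3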
I'm also seeing this when I manually run:
uploads:migrate_to_s3
rake aborted!
FileStore::ToS3MigrationError: Some uploads could not be migrated to the new scheme. You need to fix this manually. (FileStore::ToS3MigrationError)
/var/www/discourse/lib/file_store/to_s3_migration.rb:156:in `migrate_to_s3'
/var/www/discourse/lib/file_store/to_s3_migration.rb:59:in `migrate'
/var/www/discourse/lib/tasks/uploads.rake:126:in `migrate_to_s3'
/var/www/discourse/lib/tasks/uploads.rake:106:in `block in migrate_to_s3_all_sites'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-6.0.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-6.0.0/lib/rails_multisite/connection_management/null_instance.rb:36:in `each_connection'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-6.0.0/lib/rails_multisite/connection_management.rb:21:in `each_connection'
/var/www/discourse/lib/tasks/uploads.rake:104:in `migrate_to_s3_all_sites'
/var/www/discourse/lib/tasks/uploads.rake:100:in `block in <main>'
/usr/local/bin/bundle:25:in `load'
/usr/local/bin/bundle:25:in `<main>'
Tasks: TOP => uploads:migrate_to_s3
(See full trace by running task with --trace)
root@ubuntu-s-2vcpu-4gb-nyc2-01-app:/var/www/discourse# rake uploads:migrate_to_s3
Please note that migrating to S3 is currently not reversible!
6 posts were split to a new topic: Cloudflare R2: Navigating Setup and Handling Configuration Errors
Looks like Cloudflare works now:
See Cloudflare R2: Navigating Setup and Handling Configuration Errors - #13 by pfaffman