Had to park this for now. It looked like it was going to work, but something odd is going on with R2's content encoding for the assets, either the upload isn't setting the header or something else is off. The browser chokes with an 'Invalid or unexpected token' on a gz asset such as browser-detect-7af298cd000a967d2bdc01b04807eda2924a388584ea38ad84919b726283c2ed.gz.js. The rake s3:upload_assets task seems to be working, but the files aren't being read correctly on the browser side.
I don't really get why AWS S3 is fine using the local server URL for assets (they don't exist on our existing S3 bucket for uploads), but with R2 it wants to use DISCOURSE_S3_CDN_URL for the assets as well. If I could force the assets to be served from the server URL, this would probably all work.
EDIT: Chatting with CF, this seems to be the issue, and as of today it's why R2 can't be used with Discourse without some changes. I could script something in the post hook step to remove the gz assets, but I feel I'm already 'off the path' far enough for one day:
Files that you gzip are not currently handled correctly by R2. You have to upload uncompressed files. Cloudflare has transparent compression: it picks identity, gzip, or Brotli based on what the client can handle. This is a difference from S3.
Thank you for putting together this guide! I have had some success using Minio.
For anyone else who is trying to set it up locally with Docker Compose, you can tell Docker to add a hostname alias so that it works as a subdomain, like this:
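(A minimal sketch of the compose fragment; the service name, image, and command here are assumptions, the part that matters is the `networks`/`aliases` section, using the hostnames from the settings below:)

```yaml
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
    networks:
      default:
        aliases:
          # Extra hostnames that resolve to this container on the
          # compose network, so the subdomain-style URLs work:
          - minio.mydomain.com
          - assets.minio.mydomain.com
```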
In this case, you would set DISCOURSE_S3_ENDPOINT=http://minio.mydomain.com:9000 and DISCOURSE_S3_CDN_URL=//assets.minio.mydomain.com:9000, and point the subdomains to localhost in your local /etc/hosts file.
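(For reference, those settings as app.yml env entries, with the matching /etc/hosts line as a comment; the values are taken straight from the setup above:)

```yaml
env:
  DISCOURSE_S3_ENDPOINT: http://minio.mydomain.com:9000
  DISCOURSE_S3_CDN_URL: //assets.minio.mydomain.com:9000
  # and on the host machine, add to /etc/hosts:
  #   127.0.0.1 minio.mydomain.com assets.minio.mydomain.com
```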
Hey @Falco - Is this referring to the way the Content-Encoding: gzip header works with their Spaces CDN? That sounds similar to Cloudflare R2, in that the asset location is made the same as the uploads CDN, so the gzip breaks. Here's what happens with R2 today.
It might be worth considering a toggle for that behaviour, i.e. serving assets from the origin rather than always from DISCOURSE_S3_CDN_URL? I'll happily go look at how to do this, if it would be considered as a potential config change.
That's what should happen if you omit DISCOURSE_S3_CDN_URL, but since it's a weird corner case, and a potentially expensive mistake, it's not a common configuration.
Yep, I can understand that. A new GlobalSetting bool, S3_ORIGIN_ASSETS (or S3_BROKEN_PROXY_FUDGE), around about here, sort of like how the test scripts aren't compressed, would allow DigitalOcean Spaces and Cloudflare R2 storage and CDN to work with Discourse out of the box, which seems like a nice feature for not much effort. Maybe one for future consideration anyway.
Oh, I saw in the 3.0.beta release notes that there's something new. I'll give it a go, unless I've misunderstood what it's for? It might allow Cloudflare R2 and DigitalOcean Spaces to be used even with their CDNs doing that weird stuff with gzip.
The setting allowed me to specify the local site as the origin, getting around the need for the JS assets to be on the S3 host (in this case Cloudflare R2 or DigitalOcean Spaces with the CDN enabled). Thanks to @david for the change, even if that wasn't the intention.
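(For anyone following along, a sketch of the app.yml entries I mean; I'm assuming the new global setting is exposed as DISCOURSE_S3_ASSET_CDN_URL, so check the release notes for the exact name, and the hostnames here are placeholders:)

```yaml
env:
  # Uploads keep coming through the R2/Spaces CDN:
  DISCOURSE_S3_CDN_URL: https://cdn.example.com
  # Assumed name of the new setting: point the JS/CSS assets back at
  # the forum's own origin, so the CDN's gzip handling never sees them.
  DISCOURSE_S3_ASSET_CDN_URL: https://forum.example.com
```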
This seems to have been fixed recently.
The 2023-03-16 changelog lists a bug fix for gzip file handling.
We are running our Discourse forum at discourse.aosus.org with R2 right now (we haven't run migrate_to_s3 yet), and it seems to be OK! No noticeable issues so far.
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: "us-east-1" # alias for "auto" on R2
#DISCOURSE_S3_INSTALL_CORS_RULE: true # it should be supported
DISCOURSE_S3_ENDPOINT: <your S3 API URL>
DISCOURSE_S3_ACCESS_KEY_ID: xxx
DISCOURSE_S3_SECRET_ACCESS_KEY: xxxx
DISCOURSE_S3_CDN_URL: <your CDN URL>
DISCOURSE_S3_BUCKET: <your bucket name>
Is there a way to specify a separate host for backups? It would be great if it were possible to leave R2 just for CDN stuff.
It's weird that the settings in ENV are not reflected in the admin UI. Does overriding happen? Will new S3 settings in the admin UI override those in the environment?