The setting allowed me to specify the local site as the origin, getting around the need for the JS assets to be hosted on the S3 site (in this case Cloudflare R2 or DigitalOcean Spaces with the CDN enabled). Thanks to @david for the change, even if that wasn’t the intention.
This seems to have been fixed recently: the 2023-03-16 changelog lists a bug fix for gzip file handling.
We are running our Discourse forum at discourse.aosus.org on R2 right now (we haven’t run migrate_to_s3 yet), and it seems to be OK; no noticeable issues so far.
```yaml
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: "us-east-1"       # alias for "auto" on R2
#DISCOURSE_S3_INSTALL_CORS_RULE: true  # it should be supported
DISCOURSE_S3_ENDPOINT: S3_API_URL
DISCOURSE_S3_ACCESS_KEY_ID: xxx
DISCOURSE_S3_SECRET_ACCESS_KEY: xxxx
DISCOURSE_S3_CDN_URL: your cdn url
DISCOURSE_S3_BUCKET: BUCKET_NAME
```
Is there a way to specify a separate host for backups? It would be great if it were possible to leave R2 just for CDN duties.
It’s weird that the settings in ENV are not reflected in the admin UI. Does overriding happen? Will new S3 settings in the admin UI override those in the environment?
@Falco - It could be good to add a warning for Scaleway: it only supports 1,000 parts per multipart upload, while AWS supports 10,000. This is not a problem for regular uploads, but it is an issue for backup uploads over a certain size, because the S3 SDK will use up to 10,000 parts unless the part size is manually adjusted, and the upload will fail.
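To see why the part limit bites on large backups, here is a small sketch of the arithmetic. The part limits come from the providers’ public docs, and the 5 MiB floor is the S3 minimum part size; the 120 GiB backup is a hypothetical figure for illustration.

```python
import math

AWS_MAX_PARTS = 10_000        # S3 multipart upload part limit
SCALEWAY_MAX_PARTS = 1_000    # Scaleway's lower limit
MIN_PART_SIZE = 5 * 1024 * 1024  # 5 MiB S3 minimum part size

def min_part_size(file_size: int, max_parts: int) -> int:
    """Smallest part size that fits the file in at most max_parts parts."""
    return max(MIN_PART_SIZE, math.ceil(file_size / max_parts))

backup = 120 * 1024**3  # a hypothetical 120 GiB backup

print(min_part_size(backup, AWS_MAX_PARTS) // 1024**2)       # 12 (MiB per part)
print(min_part_size(backup, SCALEWAY_MAX_PARTS) // 1024**2)  # 122 (MiB per part)
```

In other words, an SDK left at a part size tuned for 10,000 parts will run out of part numbers on Scaleway long before it would on AWS, so the part size has to be raised by roughly a factor of ten for the same backup.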
Thank you. I would also add that you can use any of several tools to copy from cloud to cloud, especially to/from S3-compatible object storage: for example Rclone, ShareGate, GS Richcopy 360, and GoodSync. All of these work with similar clouds.
We have just discovered a problem: Cloudflare R2 doesn’t allow public-read access via the S3 endpoint URL, only via a custom domain or a random r2.dev domain.
(Pre-signed downloads work; direct public access is just not supported.)
But Discourse only uses the CDN URL for embedded images, not for direct downloads, which use the S3 endpoint URL.
Is there a way to make it use the CDN URL for all files, or to force the use of a presigned URL?
Related:
The workaround mentioned in that post works: adding ?dl=1 fixes it, because it forces Discourse to use a presigned S3 URL.
I also see this with some frequency (every several months), even though my Discourse is running in AWS Lightsail and I’m uploading to AWS S3, so I’m not sure it’s Wasabi’s fault.
Would it be possible to catch this error and alert the admin? I do check disk space and remove old backups when I upgrade, but sometimes that’s too late and the forum goes down for lack of disk space.
I am fairly certain the issue was that automatic OS reboots for security updates were happening while the backup was running. Make sure you schedule your OS reboots and your backups at different times. It was after I’d moved that site off Wasabi that I came up with this explanation, but I’m pretty sure that’s what it was.
uptime says it’s been up for 300 days, so I don’t think that’s the problem. But along similar lines, I had Discourse backups scheduled at 2:00 am and Lightsail snapshots at 2:30 am, so maybe the upload sometimes isn’t complete and the snapshot interferes with it. I’ve separated the two operations by an hour; we’ll see if it makes a difference.
Regardless, I think it’s reasonable to warn admins when the upload fails, for whatever reason.