@Falco - It could be good to add a warning for Scaleway: it only supports 1,000 parts for multipart uploads, while AWS supports 10,000. This isn't a problem for regular uploads, but it is for backup uploads over a certain size, as the S3 SDK will use up to 10,000 parts unless manually configured, and the upload will fail.
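For anyone hitting this, here is a rough way to reason about part sizes (a minimal Ruby sketch, not Discourse's actual code; the function name and the 5 MB floor are my own, the floor being S3's documented minimum part size):

```ruby
# Pick a multipart part size large enough that a file of a given size
# stays under a provider's part limit (Scaleway: 1,000 parts vs AWS's 10,000).
def min_part_size(file_size_bytes, max_parts: 1_000, floor: 5 * 1024 * 1024)
  # S3 requires every part except the last to be at least 5 MB,
  # so never go below that floor.
  [(file_size_bytes.to_f / max_parts).ceil, floor].max
end

# A 20 GB backup needs parts of at least ~21 MB to fit in 1,000 parts.
min_part_size(20 * 1024**3)  # => 21474837
```

So any backup over about 5 GB (1,000 × 5 MB) needs a larger-than-default part size on a 1,000-part provider.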
Thank you. I'd also add that you can use any of these tools to copy from cloud to cloud, especially to/from S3-compatible object storage: Rclone, ShareGate, GS RichCopy 360, and GoodSync. All of these work with similar clouds.
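With rclone, a cloud-to-cloud copy looks roughly like this (a sketch; `src` and `dst` are assumed remote names you would first create with `rclone config`, and `my-bucket` is a placeholder):

```shell
# Copy everything from one S3-compatible bucket to another.
rclone copy src:my-bucket dst:my-bucket --progress

# Afterwards, verify the two sides match by size/checksum.
rclone check src:my-bucket dst:my-bucket
```

Note that the data flows through the machine running rclone unless both remotes support server-side copy, so bandwidth matters for large buckets.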
We have just discovered a problem: Cloudflare R2 doesn't allow public-read access from the S3 endpoint URL, only from a custom domain or a random r2.dev domain.
(Pre-signed downloads work, just no direct public access is supported.)
But Discourse only uses the CDN URL for embedded images, not for direct downloads, which use the S3 endpoint URL.
Is there a way to make it use the CDN URL for all files, or to force the use of a pre-signed URL?
Related:
The workaround mentioned in that post works: adding ?dl=1 fixes it, because it forces Discourse to use a pre-signed S3 URL.
I also see this with some frequency (every several months), even though my Discourse is running in AWS Lightsail and I'm uploading to AWS S3. So I'm not sure it's Wasabi's fault.
Would it be possible to catch this error and alert the admin? I do check the disk space and remove old backups when I upgrade, but sometimes that's too late and the forum goes down for lack of disk space.
I am fairly certain the issue was that automatic OS reboots for security updates were happening while the backup was running. Make sure you schedule your OS reboots and your backups at different times. It was after I'd moved that site from Wasabi that I came up with this explanation, but I'm pretty sure that's what it was.
uptime says it's been up for 300 days, so I don't think that's the problem. But along similar lines, I had Discourse backups scheduled at 2:00 am and Lightsail snapshots at 2:30 am, so maybe the upload sometimes isn't complete and the snapshot interferes with it. I've separated the two operations by an hour; we'll see if it makes a difference.
Regardless, I think it's reasonable to warn admins if the upload fails, for whatever reason.
After implementing remote Backblaze backups, I see this error in my dashboard:
The server is configured to upload files to S3, but there is no S3 CDN configured. This can lead to expensive S3 costs and slower site performance. See "Using Object Storage for Uploads" to learn more.
I didn't configure uploading of files; I only configured backups via this config:
It has happened once in the past month since I separated the two processes by an hour, so that didn't "fix" it, and it doesn't happen often enough to say whether it helped.
On the bright side, I noticed there is a backup status section on the admin page that shows available disk space, which saves me from constantly opening a terminal and running df just to check for stuck backups. I customized the text to remind myself that I expect around 80 GB free.
I just couldn't get this to work with Scaleway using the Bitnami Discourse image.
The env variables were set but clearly weren't being read/applied correctly (or at all?).
So I've set the S3 variables in the admin panel and set the region directly in the Rails console (still hoping that this just becomes a text field): SiteSetting.s3_region = "fr-par"
It gave me a validation error, but I just commented out the validation check before updating the setting, then put it back in afterwards.
The Bitnami image isn't packaged by us and doesn't follow our recommendations. Everything documented here is only tested against the official install.
This has been solved by enabling "s3 use cdn url for all uploads", an option recently added by Discourse.
Since we were using R2 before, we needed to use a Discourse remap to manually replace the broken links, synced the S3 files just in case, and then rebaked all posts.
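For anyone needing to do the same, the remap-and-rebake steps look roughly like this (a sketch using Discourse's standard rake tasks; both hostnames are placeholders, not real endpoints):

```shell
# Run inside the running container (./launcher enter app).
cd /var/www/discourse

# Rewrite the old base URL to the new one across all posts.
# Replace both placeholder hostnames with your actual old/new endpoints.
rake posts:remap["old-bucket.example.r2.dev","new-cdn.example.com"]

# Rebake all posts so the cooked HTML picks up the rewritten URLs.
rake posts:rebake
```

Remap edits the raw post content directly, so it's worth taking a backup before running it.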
I'm trying to set this up with IDrive e2, which is S3-compatible. However, I'm getting a not-very-helpful error/stack trace at the end of ./launcher rebuild app:
```text
I, [2023-10-14T15:08:08.026184 #1]  INFO -- : > cd /var/www/discourse && sudo -E -u discourse bundle exec rake s3:upload_assets
rake aborted!
Aws::S3::Errors::InternalError: We encountered an internal error, please try again.
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/plugins/sse_cpk.rb:24:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/plugins/dualstack.rb:27:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/plugins/accelerate.rb:56:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/checksum_algorithm.rb:111:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:22:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/idempotency_token.rb:19:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/param_converter.rb:26:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/seahorse/client/plugins/request_callback.rb:71:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/response_paging.rb:12:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/response_target.rb:24:in `call'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/request.rb:72:in `send_request'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/client.rb:12369:in `put_object'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/object.rb:1472:in `put'
/var/www/discourse/lib/s3_helper.rb:78:in `upload'
/var/www/discourse/lib/tasks/s3.rake:41:in `block in upload'
/var/www/discourse/lib/tasks/s3.rake:41:in `open'
/var/www/discourse/lib/tasks/s3.rake:41:in `upload'
/var/www/discourse/lib/tasks/s3.rake:197:in `block (2 levels) in <main>'
/var/www/discourse/lib/tasks/s3.rake:197:in `each'
/var/www/discourse/lib/tasks/s3.rake:197:in `block in <main>'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
/usr/local/bin/bundle:25:in `load'
/usr/local/bin/bundle:25:in `<main>'
Tasks: TOP => s3:upload_assets
(See full trace by running task with --trace)
I, [2023-10-14T15:08:16.413098 #1]  INFO -- : Installing CORS rules...
skipping
Uploading: assets/admin-2ebebf57104b0beb47a1c82fe5a8c6decd07f60a706640345fed296a094d1536.js
```
This is the config I've been using, but I've also tried it with DISCOURSE_S3_CONFIGURE_TOMBSTONE_POLICY and DISCOURSE_S3_HTTP_CONTINUE_TIMEOUT.
Note I'm not using it for backups (that's already set up in the UI with Backblaze), nor DISCOURSE_CDN_URL, because I'm not sure IDrive supports that. I planned on experimenting with that once I got some actual files in the bucket.
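For comparison, the S3-related env variables Discourse reads from app.yml generally look like this (a generic illustration with placeholder values only, not my actual config):

```yaml
env:
  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: us-west-1            # some S3-compatible providers accept any value here
  DISCOURSE_S3_ENDPOINT: https://s3.example-provider.com   # placeholder endpoint
  DISCOURSE_S3_ACCESS_KEY_ID: your-key-id
  DISCOURSE_S3_SECRET_ACCESS_KEY: your-secret-key
  DISCOURSE_S3_BUCKET: your-upload-bucket
  DISCOURSE_S3_CDN_URL: https://cdn.example.com            # placeholder, optional
```

If the variables don't seem to take effect, check that they sit under the `env:` section of the container definition and rebuild with ./launcher rebuild app.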