Configure an S3-compatible object storage provider for uploads

The setting allowed me to specify the local site as the origin, getting around the need for the JS assets to be on the S3 site (in this case Cloudflare or DigitalOcean Spaces with CDN enabled). Thanks to @david for the change, even if that wasn’t the intention.
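In app.yml terms that would look something like the following (a sketch only; I’m assuming the S3 CDN URL variable is the setting in question, and forum.example.com stands in for your own site):

  # point the asset CDN back at the forum itself, so the JS assets
  # don't have to live on the object storage provider's CDN
  DISCOURSE_S3_CDN_URL: https://forum.example.com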

4 Likes

Do you enter the site URL for the asset CDN? Clever!

1 Like

Hi folks, does anybody know if this could be related to Discourse?

This is the XML error we get for files we tried to upload to our S3 storage, which was previously working with Discourse:

<Error>
<Code>InvalidArgument</Code>
<Message>
Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.
</Message>
<ArgumentName>Authorization</ArgumentName>
<ArgumentValue>null</ArgumentValue>
<RequestId>ID</RequestId>
<HostId>
ID
</HostId>
</Error>
1 Like

Are you using AWS? Something else?

Is that bucket configured with server side encryption?

It could be that a library got updated and is behaving differently.

2 Likes

Thanks, I double-checked and it seems to work with the automatic encryption configuration, but not when managing my own (KMS) keys from the S3 management console.

Do you know if that is possible within Discourse?

1 Like

3 posts were split to a new topic: Why run UpdatePostUploadsSecureStatus even when secure uploads is disabled?

This seems to have been fixed recently.
The 2023-03-16 changelog lists a bug fix for gzip file handling.

We are running our Discourse forum at discourse.aosus.org with R2 right now (we haven’t run migrate_to_s3 yet), and it seems to be OK! No noticeable issues so far.

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: "us-east-1" # alias to auto
  #DISCOURSE_S3_INSTALL_CORS_RULE: true # it should be supported
  DISCOURSE_S3_ENDPOINT: S3_API_URL
  DISCOURSE_S3_ACCESS_KEY_ID: xxx
  DISCOURSE_S3_SECRET_ACCESS_KEY: xxxx
  DISCOURSE_S3_CDN_URL: your cdn url
  DISCOURSE_S3_BUCKET: BUCKET_NAME
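
(For anyone copying this: with a standard Docker install these lines go in the env: section of containers/app.yml, and you need to rebuild the container with ./launcher rebuild app for them to take effect.)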

Is there a way to specify a separate host for backups? It would be great if it were possible to leave R2 just for CDN stuff.

2 Likes

There is not. It seems unlikely to me that this will change.
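
A separate bucket on the same endpoint can be used for backups, though. A sketch, assuming the standard backup settings:

  # backups go to their own bucket, but the endpoint/host is still
  # the one shared with uploads
  DISCOURSE_BACKUP_LOCATION: s3
  DISCOURSE_S3_BACKUP_BUCKET: BACKUP_BUCKET_NAME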

1 Like

23 posts were split to a new topic: Troubles configuring Object Storage

It’s weird that the settings in ENV are not reflected in the admin UI. Does overriding happen? Will new S3 settings in the admin UI override those in the environment?

1 Like

Yes. Env variables override values in the database and are hidden from the UI.

4 Likes

@Falco - It could be good to add a warning for Scaleway: it only supports 1,000 parts for multipart uploads, while AWS supports 10,000. This is not a problem for regular uploads, but it is an issue for backup uploads over a certain size, as the S3 SDK will use up to 10,000 parts unless manually configured otherwise, and the upload will fail.

https://www.scaleway.com/en/docs/storage/object/api-cli/multipart-uploads/
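
To put a rough number on it (assuming the SDK’s usual 5 MB minimum part size and a part count targeted at 10,000): a 20 GB backup gets split into roughly 4,000 parts of 5 MB each, which is fine on AWS but far over Scaleway’s 1,000-part cap, while backups under about 5 GB still fit within the limit.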

4 Likes

Great find! Please add it to the OP wiki if you can.

3 Likes

Thank you. I would also add that you can use tools like Rclone, Shargate, Gs Richcopy360, or GoodSync to copy from cloud to cloud, especially to/from S3-compatible object storage. All of these work with similar clouds.

1 Like

We have just discovered a problem: Cloudflare R2 doesn’t allow public read access via the S3 endpoint URL, only via a custom domain or a random r2.dev domain.
(Pre-signed downloads work; it’s just that direct public access isn’t supported.)
But Discourse only uses the CDN URL for embedded images, not for direct downloads, which use the S3 endpoint URL.
Is there a way to make it use the CDN URL for all files, or to force the use of a pre-signed URL?

Related:

The workaround mentioned in that post works: adding ?dl=1 fixes it, because it forces Discourse to use a pre-signed S3 URL.
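
(For example, with a hypothetical attachment link like https://forum.example.com/uploads/short-url/xyz.pdf, linking to https://forum.example.com/uploads/short-url/xyz.pdf?dl=1 makes Discourse hand out a time-limited pre-signed URL instead of the public S3 endpoint URL.)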

1 Like

Fixed in the 2023-03-16 release. R2 now works with Discourse like a charm on the free plan.

3 Likes

I also see this with some frequency (every few months), even though my Discourse is running in AWS Lightsail and I’m uploading to AWS S3, so I’m not sure it’s Wasabi’s fault.

Would it be possible to catch this error and alert the admin? I do check the disk space and remove old backups when I upgrade, but sometimes that’s too late and the forum goes down due to lack of disk space.

1 Like

I am fairly certain that the issue was that automatic OS reboots for security updates were happening while the backup was running. Make sure that you schedule your OS reboots and your backups at different times. It was after I’d moved that site from Wasabi that I came up with this explanation, but I’m pretty sure that’s what it was.

2 Likes

uptime says it’s been up for 300 days, so I don’t think that’s the problem. But along similar lines, I had Discourse backups scheduled at 2:00 am and Lightsail snapshots at 2:30 am, so maybe the upload sometimes isn’t complete and the snapshot messes with it. I’ve separated the two operations by an hour; we’ll see if it makes a difference.

Regardless I think it’s reasonable to warn admins if the upload fails, for whatever reason.

2 Likes

I think that it’s time that you do kernel upgrades and reboot. :slight_smile:

Could you be running out of RAM?