Configure an S3 compatible object storage provider for uploads

Yes, I don’t get script-src warnings when I put the two d23whatever.cloudfront.net URLs in the env variables. It’s only when I put my custom URLs, i.e. community-cdn.mydomain and files-cdn.mydomain, in the env variables that I get these script-src warnings. And apparently the Stripe JS is still giving me this warning even though it’s in my content security policy script-src.

2 Likes

I set up S3 Uploads and object storage as outlined here in the OP, but without a CDN.

For the DISCOURSE_S3_CDN_URL variable, I have this:
https://my-bucket-uploads.s3.dualstack.us-west-2.amazonaws.com

All seems fine, including backups, however, in the console this error shows up when a reply to a post is started:

The request URL in the error is actually two URLs concatenated together, which seems to be the cause:

https://mydiscourse.com/t/uploads-test-for-s3/79/https://my-bucket-uploads.s3.dualstack.us-west-2.amazonaws.com/assets/markdown-it-bundle-a7328b73d3e7b030770eab70f10bdb0af655b3d8fa929bc49f1ad04c4cdaa198.br.js

2 Likes

A CDN is mandatory for it to work correctly.
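To illustrate: the `DISCOURSE_S3_CDN_URL` variable is meant to hold the URL of a CDN distribution placed in front of the uploads bucket, not the raw bucket endpoint from the post above. A minimal app.yml sketch (the CloudFront domain is a placeholder, not a real distribution):

```yaml
env:
  # Point this at a CDN (e.g. a CloudFront distribution) fronting the
  # uploads bucket, NOT at the s3.dualstack bucket endpoint itself:
  DISCOURSE_S3_CDN_URL: https://dxxxxxxxxxxxx.cloudfront.net
```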

4 Likes

I’m also in this situation, with an object store configured (MinIO) but no CDN. Is this a use case that could be supported?

From what I’m seeing so far in my tests, only the markdown-it-bundle JS file is having issues, as it’s pointing to the wrong URL: DISCOURSE_HOSTNAME/DISCOURSE_S3_CDN_URL/assets/markdown-it-bundle-HASH.br.js

It actually looks like a bug: even if I set the DISCOURSE_CDN_URL variable, it still points to a wrong URL of the form DISCOURSE_HOSTNAME/DISCOURSE_CDN_URL/assets/markdown-it-bundle-HASH.br.js

It should point to DISCOURSE_S3_CDN_URL/assets/markdown-it-bundle-HASH.br.js

Other JS assets are pointing to the right URL.

I guess from what you are saying, I will have other issues that I have not identified yet. Maybe you can give me more info on what could go wrong?

If I understand it well, JS assets are on the object store and stylesheets should be on a CDN. Without a CDN, could the stylesheets be delivered by the app as usual? (From what I’m seeing, that’s the case.)

Thanks for your help

3 Likes

That is not a supported use case per the OP:

1 Like

Dear all,

I set up a new Discourse server with Lightsail, using this guide for S3 uploads and backups: setting-up-file-and-image-uploads-to-s3

After setup, I get the error “The bucket does not allow ACLs” on screen when I upload an image.

Here is my policy for S3:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectVersionTagging",
                "s3:CreateBucket",
                "s3:GetObjectAcl",
                "s3:GetBucketObjectLockConfiguration",
                "s3:PutLifecycleConfiguration",
                "s3:GetObjectVersionAcl",
                "s3:PutObjectTagging",
                "s3:DeleteObject",
                "s3:DeleteObjectTagging",
                "s3:GetBucketPolicyStatus",
                "s3:GetObjectRetention",
                "s3:GetBucketWebsite",
                "s3:ListJobs",
                "s3:DeleteObjectVersionTagging",
                "s3:GetObjectLegalHold",
                "s3:GetBucketNotification",
                "s3:PutBucketCORS",
                "s3:GetReplicationConfiguration",
                "s3:ListMultipartUploadParts",
                "s3:PutObject",
                "s3:GetObject",
                "s3:DescribeJob",
                "s3:PutObjectVersionAcl",
                "s3:GetAnalyticsConfiguration",
                "s3:GetObjectVersionForReplication",
                "s3:GetLifecycleConfiguration",
                "s3:GetAccessPoint",
                "s3:GetInventoryConfiguration",
                "s3:GetBucketTagging",
                "s3:GetBucketLogging",
                "s3:ListBucketVersions",
                "s3:ReplicateTags",
                "s3:ListBucket",
                "s3:GetAccelerateConfiguration",
                "s3:GetBucketPolicy",
                "s3:GetEncryptionConfiguration",
                "s3:GetObjectVersionTorrent",
                "s3:AbortMultipartUpload",
                "s3:PutBucketTagging",
                "s3:GetBucketRequestPayment",
                "s3:GetAccessPointPolicyStatus",
                "s3:GetObjectTagging",
                "s3:GetMetricsConfiguration",
                "s3:PutObjectAcl",
                "s3:GetBucketPublicAccessBlock",
                "s3:ListBucketMultipartUploads",
                "s3:ListAccessPoints",
                "s3:PutObjectVersionTagging",
                "s3:GetBucketVersioning",
                "s3:GetBucketAcl",
                "s3:GetObjectTorrent",
                "s3:GetAccountPublicAccessBlock",
                "s3:ListAllMyBuckets",
                "s3:GetBucketCORS",
                "s3:GetBucketLocation",
                "s3:GetAccessPointPolicy",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::mybucket-upload",
                "arn:aws:s3:::mybucket-upload/*",
                "arn:aws:s3:::mybucket-backup",
                "arn:aws:s3:::mybucket-backup/*"
            ]
        }
    ]
}
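For what it’s worth, the “bucket does not allow ACLs” error usually comes from the bucket’s Object Ownership setting being “Bucket owner enforced” (which disables ACLs entirely), not from the IAM policy above. A sketch of re-enabling ACLs with the AWS CLI, using the same bucket name as in the policy (verify these settings against your own security requirements before applying):

```shell
# Switch Object Ownership away from "Bucket owner enforced",
# which rejects all ACL operations on the bucket:
aws s3api put-bucket-ownership-controls \
  --bucket mybucket-upload \
  --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'

# Make sure Block Public Access isn't blocking or ignoring ACLs either:
aws s3api put-public-access-block \
  --bucket mybucket-upload \
  --public-access-block-configuration \
  'BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false'
```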

And here is my public access setup for the S3 bucket:

Would someone help me to solve this issue, please?
Thanks so much
Cheers,
Quang

3 Likes

Should my staging site use the same S3 bucket as my production site?

1 Like

No, that would be very unsafe: one environment could delete or overwrite files that should still exist in the other, which could cause missing files, wrong files, and so on.

Both the buckets and the credentials should be different (and the staging credentials shouldn’t have access to the production bucket, especially for write and delete operations).

Maybe there’s a way using paths with different credentials for each path, but the chances of shooting yourself in the foot are high, so I advise using separate buckets.
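As a sketch of that separation, the staging credentials can be granted access to the staging bucket only, so they simply cannot touch production objects (bucket name and Sid here are hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StagingBucketOnly",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::mysite-staging-uploads",
        "arn:aws:s3:::mysite-staging-uploads/*"
      ]
    }
  ]
}
```

With no statement covering the production bucket ARNs, write and delete requests from staging are denied by default.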

5 Likes

Do DISCOURSE_CDN_URL and DISCOURSE_S3_CDN_URL need to be separate as well?

1 Like

I assume so, because if your staging and production domains/URLs are different (they are, aren’t they?), then DISCOURSE_CDN_URL (which ends up pointing to the CDN provider, which in turn points to your website domain) is expected to be different for staging and production. The same logic applies to DISCOURSE_S3_CDN_URL (because different buckets should have different URLs).

3 Likes

Hey all, I’m pretty new to S3, so I’m not entirely sure how to phrase this, but I’ll try my best. So, I just switched to using S3 for uploads and backups and I have been using Discourse Connect in order to allow for logins on other parts of my site, but now profile images don’t work. I believe this has to do with CORS policies, but I’m not sure where I could configure it. I would ideally want to whitelist it for forum.domain.tld and domain.tld - or a wildcard on all subdomains would work too. Is this something I would set in Discourse, or where exactly? I’m using Vultr object storage if that makes a difference.
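CORS is configured on the bucket itself rather than in Discourse. If your provider supports the S3 CORS API (Vultr object storage is S3-compatible, though feature coverage varies by provider), a hypothetical configuration allowing both domains might look like this, applied with `aws s3api put-bucket-cors` and a custom `--endpoint-url`:

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://domain.tld", "https://forum.domain.tld"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
```

The domain names are the ones from the question above; a wildcard origin like `https://*.domain.tld` is also accepted by the S3 CORS spec if all subdomains should be covered.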

1 Like

Can versioning be enabled on the files S3 bucket? Is AWS Backup the recommended way to backup S3 buckets for Discourse?

1 Like

Yes.

Versioning and syncing to a different region are both good strategies.
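Enabling versioning is a one-time bucket setting; a sketch with the AWS CLI (the bucket name is a placeholder):

```shell
# Turn on versioning so overwritten/deleted uploads keep prior versions:
aws s3api put-bucket-versioning \
  --bucket my-discourse-uploads \
  --versioning-configuration Status=Enabled

# Cross-region replication additionally requires versioning on both
# buckets plus a replication rule (see: aws s3api put-bucket-replication).
```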

4 Likes

I wanted to add one thing for Backblaze, as I just set this up and this might save others some time:

The Master Application Key is not compatible with the S3 API. You must create a new application key (source).

And I wanted to ask 3 questions, to clarify some things:

  1. Is it normal to have a lot of missing .map files? They all seem to be from the brotli_asset folder. They are on neither the server nor the object storage.
  2. I’ve seen reports that DISCOURSE_S3_BUCKET was deprecated and DISCOURSE_S3_UPLOADS_BUCKET should be used. Which is the correct one?
  3. Is it necessary to add DISCOURSE_ENABLE_S3_UPLOADS: true? I’ve seen this mentioned in other topics.

Thanks.

2 Likes

Yes, that is a known bug in our asset pipeline that will be solved by the ongoing Ember CLI migration.

The deprecation warning is correct; I need to update the wiki guide here in the OP.

Not mandatory at the moment, because I’m pretty sure it gets overwritten by the USE_S3 ENV, but I would have to dig into the codebase for a definite answer on that.
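Putting those answers together, a minimal uploads sketch using the variable names discussed in this thread (all values are placeholders; `DISCOURSE_ENABLE_S3_UPLOADS` is left out since `USE_S3` appears to take precedence per the answer above):

```yaml
env:
  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: us-east-1
  # Per the thread, preferred over the deprecated DISCOURSE_S3_BUCKET:
  DISCOURSE_S3_UPLOADS_BUCKET: my-uploads-bucket
  DISCOURSE_S3_ACCESS_KEY_ID: <key id>
  DISCOURSE_S3_SECRET_ACCESS_KEY: <secret>
  DISCOURSE_S3_CDN_URL: https://cdn.example.com
```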

3 Likes

I’m working on a multisite instance where I tried to restore a database from another instance, and when I restored database-only, the main page rendered JSON saying it was required. But that’s likely an edge case. I was always confused by the DISCOURSE_S3_BUCKET env variable…

1 Like

I’m still curious about this if anyone has any insight, also I just had another question come up.

If I were wanting to change the domain of my Discourse installation, how would that impact Object Storage access policies? Would I need to change rules, or would that be taken care of for me by Discourse?

1 Like

@Falco

Have you seen this? https://blog.cloudflare.com/introducing-r2-object-storage/

I’ve already signed up for a test, looking forward to testing it.

2 Likes

I don’t know anyone who’s seen it.

I signed up for that test long ago, back in October. It doesn’t seem to be an actual product.

1 Like

Interesting.

I got an email about signing up for the test roughly two weeks ago; that’s the only reason I learned about it, since I don’t follow the Cloudflare blog. Hopefully it doesn’t get shelved like Railgun, though Argo is just so much better.

1 Like