Issues with AWS CDN and S3


@Falco Is this still the case? I thought I read something about recent issues using AWS, but I can’t find the topic again.

I am having numerous issues setting up AWS S3, using the various relevant topics as guides.

Backups are working as expected, but using CloudFront as a CDN, or uncommenting DISCOURSE_USE_S3 and/or DISCOURSE_S3_BUCKET, causes a perpetual throbber.

I suspect I have something misconfigured in the uploads bucket and/or the CloudFront distribution, but I have not been able to find a mistake. Both the upload and backup buckets are behind the distribution and backups work fine, so I’m stumped.

discourse-cdn.repealobbba.org CNAME —> amazonassigned.cloudfront.net

    DISCOURSE_CDN_URL: https://discourse-cdn.repealobbba.org

    ## S3 storage config
    #  DISCOURSE_USE_S3: true
      DISCOURSE_S3_REGION: us-east-1
      DISCOURSE_S3_ACCESS_KEY_ID: ACCESS_KEY_ID
      DISCOURSE_S3_SECRET_ACCESS_KEY: SECRET_ACCESS_KEY
      DISCOURSE_S3_CDN_URL: amazonassigned.cloudfront.net
    #  DISCOURSE_S3_BUCKET: repeal-obbba-discuss-uploads
      DISCOURSE_S3_BACKUP_BUCKET: repeal-obbba-discuss-backups
      DISCOURSE_BACKUP_LOCATION: s3
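
For reference, the fully uncommented stanza I’m aiming for, per the standard S3 setup guide, would look roughly like this (bucket names from above; keys redacted):

    DISCOURSE_USE_S3: true
    DISCOURSE_S3_REGION: us-east-1
    DISCOURSE_S3_ACCESS_KEY_ID: ACCESS_KEY_ID
    DISCOURSE_S3_SECRET_ACCESS_KEY: SECRET_ACCESS_KEY
    DISCOURSE_S3_BUCKET: repeal-obbba-discuss-uploads
    DISCOURSE_S3_BACKUP_BUCKET: repeal-obbba-discuss-backups
    DISCOURSE_S3_CDN_URL: https://amazonassigned.cloudfront.net
    DISCOURSE_BACKUP_LOCATION: s3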

Additionally, adding this to the config:

    after_assets_precompile:
    - exec:
        cd: $home
        cmd:
          - sudo -E -u discourse bundle exec rake s3:upload_assets
          - sudo -E -u discourse bundle exec rake s3:expire_missing_assets

gives a FAILED TO BOOTSTRAP error:

    FAILED
    --------------------
    Pups::ExecError: cd /var/www/discourse && sudo -E -u discourse bundle exec rake s3:upload_assets failed with return #<Process::Status: pid 8484 exit 1>
    Location of failure: /usr/local/lib/ruby/gems/3.3.0/gems/pups-1.3.0/lib/pups/exec_command.rb:131:in `spawn'
    exec failed with the params {"cd"=>"$home", "cmd"=>["sudo -E -u discourse bundle exec rake s3:upload_assets", "sudo -E -u discourse bundle exec rake s3:expire_missing_assets"]}
    bootstrap failed with exit code 1
    ** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one.

As always, any thoughts or suggestions would be appreciated.

You need to add the stanza that uploads assets to s3.

Oh. I’m wrong, you’re doing it.

That suggests something is wrong with the bucket configuration.

I think there used to be a topic that would generate the json to configure a bucket, but I don’t know if it’s still around.
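
The usual goal for the uploads bucket is a policy that allows public reads; a minimal sketch (with your bucket name substituted) would be something like:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicReadGetObject",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::repeal-obbba-discuss-uploads/*"
        }
      ]
    }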


Agreed.
Although a bucket policy was not used on the backups bucket, adding a policy to the uploads bucket cleared the bootstrap failure.

The policy JSON is available at CloudFront > Distributions > your distribution > Edit origin.
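
That generated policy grants the CloudFront distribution read access via Origin Access Control; it looks roughly like this (account and distribution IDs redacted):

    {
      "Version": "2008-10-17",
      "Statement": [
        {
          "Sid": "AllowCloudFrontServicePrincipal",
          "Effect": "Allow",
          "Principal": { "Service": "cloudfront.amazonaws.com" },
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::repeal-obbba-discuss-uploads/*",
          "Condition": {
            "StringEquals": {
              "AWS:SourceArn": "arn:aws:cloudfront::ACCOUNT_ID:distribution/DISTRIBUTION_ID"
            }
          }
        }
      ]
    }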

Unfortunately the perpetual throbber persists.

Adjusting Object Ownership and ACLs does not change the result.

Current settings. I believe they are the recommended settings, or perhaps I am confused.


After you change the settings you need to run the rake task to upload the assets.

Also, you can open the developer console and see if the files it is trying to access exist in the bucket, or if there’s an issue with the CDN.
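
A quick way to check both from a terminal, using an asset that appears later in this thread:

    # Is the asset in the bucket itself?
    curl -I https://repeal-obbba-discuss-uploads.s3.us-east-1.amazonaws.com/assets/logo-815195ae.png
    # Does the same path work through the CDN?
    curl -I https://discourse-cdn.repealobbba.org/assets/logo-815195ae.png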


Thanks for continuing…

Yes, the rake tasks are run; it makes no difference.

    ./launcher enter app
    rake posts:rebake
    rake uploads:migrate_to_s3
    rake posts:rebake_uncooked_posts

Throbber persists.

rake uploads:migrate_to_s3 does yield an error:

    Migrating uploads to S3 for 'default'...
    Some uploads were not migrated to the new scheme. Running the migration, this may take a while...
    rake aborted!
    FileStore::ToS3MigrationError: Some uploads could not be migrated to the new scheme. You need to fix this manually. (FileStore::ToS3MigrationError)
    /var/www/discourse/lib/file_store/to_s3_migration.rb:156:in `migrate_to_s3'
    /var/www/discourse/lib/file_store/to_s3_migration.rb:59:in `migrate'
    /var/www/discourse/lib/tasks/uploads.rake:126:in `migrate_to_s3'
    /var/www/discourse/lib/tasks/uploads.rake:106:in `block in migrate_to_s3_all_sites'
    /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management/null_instance.rb:49:in `with_connection'
    /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management/null_instance.rb:36:in `each_connection'
    /var/www/discourse/vendor/bundle/ruby/3.3.0/gems/rails_multisite-7.0.0/lib/rails_multisite/connection_management.rb:17:in `each_connection'
    /var/www/discourse/lib/tasks/uploads.rake:104:in `migrate_to_s3_all_sites'
    /var/www/discourse/lib/tasks/uploads.rake:100:in `block in <main>'
    /usr/local/bin/bundle:25:in `load'
    /usr/local/bin/bundle:25:in `<main>'
    Tasks: TOP => uploads:migrate_to_s3

At least one CDN checker shows discourse-cdn.repealobbba.org --> Amazon CloudFront
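
For anyone checking the same thing from a terminal, the CNAME can be confirmed with dig; it should resolve to the assigned cloudfront.net hostname:

    dig +short discourse-cdn.repealobbba.org CNAME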

and the console still shows images and JS files being requested from the CDN.

The assets are requested from the CDN, but are they there? If not, are they in the bucket? Are they accessible from the bucket?

You likely have some uploads in a different bucket, or something else that keeps them from being in the expected place. If so, you’ll need to fix those by hand like it says. You’ll need to access the console and see where they point in the upload records.
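
A sketch of that kind of check from the Rails console inside the container (the LIKE pattern is just a guess at your bucket name; adjust as needed):

    # ./launcher enter app, then: rails c
    # List a few uploads whose URL does not point at the expected uploads bucket
    Upload.where.not("url LIKE ?", "%repeal-obbba-discuss-uploads%").limit(10).pluck(:id, :url)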

Does the upload assets task seem to be working? The throbber keeps running because the other assets are not loading.


Yes, they are there; the files are in the bucket as assets.
Accessible: https://repeal-obbba-discuss-uploads.s3.us-east-1.amazonaws.com/assets/logo-815195ae.png

Only two buckets, uploads and backups. Backups are working fine.

rake s3:upload_assets gave an error:

    rake aborted!
    Aws::S3::Errors::AccessControlListNotSupported: The bucket does not allow ACLs (Aws::S3::Errors::AccessControlListNotSupported)
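
For the record, “ACLs enabled” in the console corresponds to setting the bucket’s Object Ownership to ObjectWriter; the AWS CLI equivalent is roughly:

    aws s3api put-bucket-ownership-controls \
      --bucket repeal-obbba-discuss-uploads \
      --ownership-controls 'Rules=[{ObjectOwnership=ObjectWriter}]'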

Switched to ACLs enabled and ran uploads:migrate_to_s3 (edit: originally said s3:upload_assets) again, but…
    FileStore::ToS3MigrationError: Some uploads could not be migrated to the new scheme. You need to fix this manually. (FileStore::ToS3MigrationError)

??? You need to fix this manually. (FileStore::ToS3MigrationError)

It sounds like you fixed the bucket and it’s now able to upload assets, so the site should work now. Dealing with the uploads that won’t migrate is another (complicated) issue.

Here’s one of the assets you’re trying to serve:

https://discourse-cdn.repealobbba.org/assets/start-discourse-6f03a463.br.js

The cert is broken, so that’s your problem. There is a cert, but it doesn’t match the URL.
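
You can confirm which cert the CDN hostname is actually serving with openssl:

    openssl s_client -connect discourse-cdn.repealobbba.org:443 \
      -servername discourse-cdn.repealobbba.org </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -ext subjectAltName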

As I suggested earlier, look at the network tab of the dev tools in your browser and you’ll see the failing requests.


Edit from above:
Switched to ACLs enabled and ran uploads:migrate_to_s3 (not s3:upload_assets) again.

s3:upload_assets completes without issues now

I see the network errors. Is the cert an AWS issue?

Thanks again for your time on this!!

Yes. The cert doesn’t match the host name. It matches *.cloudfront.net

This works: https://repeal-obbba-discuss-uploads.s3.us-east-1.amazonaws.com/assets/logo-815195ae.png

This does not work: https://discourse-cdn.repealobbba.org/assets/start-discourse-6f03a463.br.js

This works: https://repeal-obbba-discuss-uploads.s3.us-east-1.amazonaws.com/assets/start-discourse-6f03a463.br.js

So you need to change your S3 CDN to https://repeal-obbba-discuss-uploads.s3.us-east-1.amazonaws.com – oh, I guess that’s the bucket address, so that wouldn’t be ideal, but it would work.
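
In app.yml terms that would be something like:

    DISCOURSE_S3_CDN_URL: https://repeal-obbba-discuss-uploads.s3.us-east-1.amazonaws.com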


Not too keen on “so that wouldn’t be ideal” :wink:

Looking at AWS Alternate domain names as a possible solution.
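
If I understand it correctly, an alternate domain name on the distribution also needs an ACM certificate for that hostname, issued in us-east-1; requesting one from the CLI would look roughly like:

    # CloudFront requires the ACM cert to live in us-east-1
    aws acm request-certificate \
      --domain-name discourse-cdn.repealobbba.org \
      --validation-method DNS \
      --region us-east-1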

Maybe it is just me, but adding a CDN that @Discourse uses internally should not be so difficult or require the kind support of folks like @pfaffman.

Maybe @Falco or @sam and @team (can’t mention team) can chime in??

Yes, all the sites we host use the S3 plus CloudFront combo. Even this site you are browsing right now.


Thanks for confirming! After seeing this I re-adjusted DISCOURSE_S3_CDN_URL: back to amazonassigned.cloudfront.net, hoping it might clear the cert and URL issues.

Then rebuilt and re-ran all the rake tasks mentioned in the docs:

    rake posts:rebake
    rake uploads:migrate_to_s3        # still generates FileStore::ToS3MigrationError
    rake posts:rebake_uncooked_posts
    rake s3:upload_assets
    rake s3:expire_missing_assets

Still no joy on getting the site to load.

Any suggestions?

Turns out this was helpful.