Using Object Storage for Uploads (S3 & Clones)

I went through some common object storage providers and tested them, so I can attest to whether or not they work with Discourse.

| Provider | Service Name | Works with Discourse? |
| --- | --- | --- |
| Amazon | AWS S3 | Yes |
| Digital Ocean | Spaces | Yes |
| Linode | Object Storage | Yes |
| Google | Cloud Storage | Yes |
| Scaleway | Object Storage | Yes |
| Vultr | Object Storage | Yes |
| BackBlaze | Cloud Storage | Yes |

If you got a different service working, please add it to this wiki.

Configuration

In order to store Discourse static assets in your object storage, add this configuration to your app.yml under the hooks section:

  after_assets_precompile:
    - exec:
        cd: $home
        cmd:
          - sudo -E -u discourse bundle exec rake s3:upload_assets
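
If you ever need to run that task by hand (for example, to re-upload assets without a full rebuild), here is a rough sketch of the equivalent manual run, assuming the standard install paths:

  cd /var/discourse
  ./launcher enter app
  cd /var/www/discourse
  sudo -E -u discourse bundle exec rake s3:upload_assets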

When using object storage, you also need a CDN to serve what gets stored in the bucket. I used StackPath CDN in my testing, and other than needing to set Dynamic Caching By Header: Accept-Encoding in their configuration, it works fine.

DISCOURSE_CDN_URL is a CDN that points to your Discourse hostname and caches requests. It will be used mainly for pullable assets: CSS and other theme assets.

DISCOURSE_S3_CDN_URL is a CDN that points to your object storage bucket and caches requests. It will be mainly used for pushable assets: JS, images and user uploads.

We recommend that these be two different CDNs and that admins set both.
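
For example, with a hypothetical hostname for the site CDN and the S3 CDN used in the examples below, both variables sit side by side in the env section of app.yml:

  DISCOURSE_CDN_URL: https://discourse-cdn.example.com
  DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev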

In the following examples, https://falcoland-files-cdn.falco.dev is a CDN configured to serve the files in the bucket, and the bucket name is falcoland-files.

AWS S3

This is what we officially support and use internally. Their CDN offering, CloudFront, also works to front the bucket files.

In order to use it, add this to the env section of your app.yml file, adjusting the values accordingly:

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: us-west-1
  DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
  DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
  DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
  DISCOURSE_S3_BUCKET: falcoland-files
  DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backups
  DISCOURSE_BACKUP_LOCATION: s3
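
After editing app.yml, rebuild the container so the new env settings and the asset upload hook take effect; a standard install is assumed to live in /var/discourse:

  cd /var/discourse
  ./launcher rebuild app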

Digital Ocean Spaces

DigitalOcean's offering is good and works out of the box. The only problem is that their CDN offering is badly broken, so you need to use a different CDN for the files.

Example configuration:

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: whatever
  DISCOURSE_S3_ENDPOINT: https://nyc3.digitaloceanspaces.com
  DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
  DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
  DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
  DISCOURSE_S3_BUCKET: falcoland-files
  DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backups
  DISCOURSE_BACKUP_LOCATION: s3

Linode Object Storage

An extra configuration parameter, HTTP_CONTINUE_TIMEOUT, is required for Linode.

Example configuration:

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: us-east-1
  DISCOURSE_S3_HTTP_CONTINUE_TIMEOUT: 0
  DISCOURSE_S3_ENDPOINT: https://us-east-1.linodeobjects.com
  DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
  DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
  DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
  DISCOURSE_S3_BUCKET: falcoland-files
  DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backup
  DISCOURSE_BACKUP_LOCATION: s3

GCP Storage

Listing files is broken on GCS, so you need an extra ENV variable to skip that check. You also need to skip the automatic CORS rule installation and configure CORS manually.

Example configuration:

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: us-east1
  DISCOURSE_S3_INSTALL_CORS_RULE: false
  FORCE_S3_UPLOADS: 1
  DISCOURSE_S3_ENDPOINT: https://storage.googleapis.com
  DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
  DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
  DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
  DISCOURSE_S3_BUCKET: falcoland-files
  DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backup
  DISCOURSE_BACKUP_LOCATION: s3
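
Since the CORS rule has to be set up manually on GCS, one way is gsutil; this is a rough sketch, assuming the bucket name from these examples (adjust origins and headers to your needs). Create a cors.json file:

  [{"origin": ["*"], "method": ["GET", "HEAD"], "responseHeader": ["*"], "maxAgeSeconds": 3600}]

and apply it to the bucket:

  gsutil cors set cors.json gs://falcoland-files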

Scaleway Object Storage

Scaleway's offering is also very good, and everything works fine.

Example configuration:

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: fr-par
  DISCOURSE_S3_ENDPOINT: https://s3.fr-par.scw.cloud
  DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
  DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
  DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
  DISCOURSE_S3_BUCKET: falcoland-files
  DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backups
  DISCOURSE_BACKUP_LOCATION: s3

Vultr Object Storage

An extra configuration parameter, HTTP_CONTINUE_TIMEOUT, is required for Vultr.

Example configuration:

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: whatever
  DISCOURSE_S3_HTTP_CONTINUE_TIMEOUT: 0
  DISCOURSE_S3_ENDPOINT: https://ewr1.vultrobjects.com
  DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
  DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
  DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
  DISCOURSE_S3_BUCKET: falcoland-files
  DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backup
  DISCOURSE_BACKUP_LOCATION: s3

Backblaze B2 Cloud Storage

You need to skip the automatic CORS rule installation and configure CORS manually.

Example configuration:

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: "us-west-002"
  DISCOURSE_S3_INSTALL_CORS_RULE: false
  DISCOURSE_S3_CONFIGURE_TOMBSTONE_POLICY: false
  DISCOURSE_S3_ENDPOINT: https://s3.us-west-002.backblazeb2.com
  DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
  DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
  DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
  DISCOURSE_S3_BUCKET: falcoland-files
  DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backup
  DISCOURSE_BACKUP_LOCATION: s3
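
As with GCS, the CORS rule has to be applied manually here. One approach, if your provider's S3 endpoint supports it, is the AWS CLI pointed at B2 with your B2 application keys configured; a rough sketch, assuming the bucket and endpoint from this example and a cors.json in the S3 CORS format:

  {"CORSRules": [{"AllowedOrigins": ["*"], "AllowedMethods": ["GET", "HEAD"], "AllowedHeaders": ["*"], "MaxAgeSeconds": 3000}]}

  aws s3api put-bucket-cors --bucket falcoland-files --endpoint-url https://s3.us-west-002.backblazeb2.com --cors-configuration file://cors.json

If that call is not supported, the same rules can usually be set through the provider's own console or CLI.
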
38 Likes

Hi - I work on the team at Wasabi (another S3 ‘clone’) - we have several customers using Discourse with Wasabi (example below). Could we get Wasabi added to the table?

Thanks,
Jim

2 Likes

It’s a wiki; feel free to add it to the table, along with a new section listing the environment variables needed for it.

1 Like

Please support “Alibaba Cloud” Object Storage Service :frowning:

1 Like

We officially support only AWS S3. What happens is that the S3 API became a de facto standard, so any other service that implements enough of the S3 API in their object storage offering will just work.

Feel free to try Alibaba's object storage offering and add your findings to the OP.
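
For anyone testing a provider that is not in the table yet, a generic starting point follows the same pattern as the sections in the OP; every value here is a placeholder to swap for your provider's details:

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: region-expected-by-provider
  DISCOURSE_S3_ENDPOINT: https://s3.example-provider.com
  DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
  DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
  DISCOURSE_S3_CDN_URL: https://files-cdn.example.com
  DISCOURSE_S3_BUCKET: my-bucket
  DISCOURSE_S3_BACKUP_BUCKET: my-bucket/backups
  DISCOURSE_BACKUP_LOCATION: s3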

2 Likes

Has anyone tried IBM Cloud (S3 clone)?

1 Like

I tried using IBM Cloud S3 and it works! :tada:

4 Likes

Thanks for letting us know, @Soundarahari_P! The original post is a wiki; if you would add a section on how you configured your site to use IBM Cloud, that would be great!

3 Likes

Hey @Falco.

I just tried to restore a CDCK business-hosted site to one that I configured according to the AWS instructions, and the restore failed with

EXCEPTION: 21053 posts are not remapped to new S3 upload URL. S3 migration failed for db 'default'.

unless I defined DISCOURSE_CDN_URL.

I had DISCOURSE_CDN_URL defined for the site when it launched, but erroneously thought that I could do a test on a different domain name without DISCOURSE_CDN_URL defined.

Maybe everyone else catches that, but perhaps

is a bit too subtle and you should include a DISCOURSE_CDN_URL in each of the example stanzas?

OTOH, since this was a special case where I knowingly removed the DISCOURSE_CDN_URL perhaps I got what I deserved (but I wasted Simon’s time, for which I’m sorry).

4 Likes

Hello,

I got this error while rebuilding the app. I use BackBlaze and it worked fine until now. :confused: Can someone help? Now my website is down… :confused:

FAILED

--------------------

Pups::ExecError: cd /var/www/discourse && sudo -E -u discourse bundle exec rake s3:upload_assets failed with return #<Process::Status: pid 1706 exit 1>

Location of failure: /pups/lib/pups/exec_command.rb:112:in `spawn'

exec failed with the params {"cd"=>"$home", "cmd"=>["sudo -E -u discourse bundle exec rake s3:upload_assets"]}

9dfa662fd01b90431b0b336fb3404d447791ebcd8b6dc331d2e484fb70f541ca

** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one.

./discourse-doctor may help diagnose the problem.

I, [2020-11-19T09:02:53.372225 #1] INFO -- : > cd /var/www/discourse && sudo -E -u discourse bundle exec rake s3:upload_assets

`/root` is not writable.

Bundler will use `/tmp/bundler20201119-1706-1gtrzcl1706' as your home directory temporarily.

rake aborted!

Seahorse::Client::NetworkingError: execution expired

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/net_http/connection_pool.rb:300:in `start_session'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/net_http/connection_pool.rb:99:in `session_for'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/net_http/handler.rb:123:in `session'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/net_http/handler.rb:75:in `transmit'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/net_http/handler.rb:49:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/plugins/content_length.rb:17:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/plugins/request_callback.rb:85:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/s3_signer.rb:124:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/s3_signer.rb:61:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/s3_host_id.rb:17:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/xml/error_handler.rb:10:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/transfer_encoding.rb:26:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/helpful_socket_errors.rb:12:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/s3_signer.rb:102:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/redirects.rb:20:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/retry_errors.rb:348:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/retry_errors.rb:382:in `retry_request'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/retry_errors.rb:370:in `retry_if_possible'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/retry_errors.rb:359:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/retry_errors.rb:382:in `retry_request'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/retry_errors.rb:370:in `retry_if_possible'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/retry_errors.rb:359:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/retry_errors.rb:382:in `retry_request'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/retry_errors.rb:370:in `retry_if_possible'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/retry_errors.rb:359:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/dualstack.rb:38:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/accelerate.rb:58:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/http_checksum.rb:18:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/endpoint_pattern.rb:31:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/expect_100_continue.rb:21:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/bucket_name_restrictions.rb:26:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/bucket_dns.rb:35:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/arn.rb:49:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/rest/handler.rb:10:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/user_agent.rb:13:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/plugins/endpoint.rb:47:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/param_validator.rb:26:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/sse_cpk.rb:24:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/dualstack.rb:30:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/plugins/accelerate.rb:47:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:22:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/idempotency_token.rb:19:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/param_converter.rb:26:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/plugins/request_callback.rb:71:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/plugins/response_paging.rb:12:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/plugins/response_target.rb:24:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/seahorse/client/request.rb:72:in `send_request'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/poller.rb:65:in `send_request'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/poller.rb:51:in `call'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:107:in `block in poll'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:104:in `loop'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:104:in `poll'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:94:in `block (2 levels) in wait'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:93:in `catch'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:93:in `block in wait'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:92:in `catch'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:92:in `wait'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/waiters.rb:123:in `wait'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/bucket.rb:97:in `wait_until_exists'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/bucket.rb:78:in `exists?'

/var/www/discourse/lib/s3_helper.rb:276:in `s3_bucket'

/var/www/discourse/lib/s3_helper.rb:192:in `list'

/var/www/discourse/lib/tasks/s3.rake:15:in `should_skip?'

/var/www/discourse/lib/tasks/s3.rake:31:in `upload'

/var/www/discourse/lib/tasks/s3.rake:194:in `block (2 levels) in <main>'

/var/www/discourse/lib/tasks/s3.rake:193:in `each'

/var/www/discourse/lib/tasks/s3.rake:193:in `block in <main>'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/rake-13.0.1/exe/rake:27:in `<top (required)>'

/usr/local/bin/bundle:23:in `load'

/usr/local/bin/bundle:23:in `<main>'

Caused by:

Aws::Waiters::Errors::UnexpectedError: stopped waiting due to an unexpected error: execution expired

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:114:in `block in poll'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:104:in `loop'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:104:in `poll'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:94:in `block (2 levels) in wait'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:93:in `catch'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:93:in `block in wait'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:92:in `catch'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-core-3.109.2/lib/aws-sdk-core/waiters/waiter.rb:92:in `wait'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/waiters.rb:123:in `wait'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/bucket.rb:97:in `wait_until_exists'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/aws-sdk-s3-1.83.2/lib/aws-sdk-s3/bucket.rb:78:in `exists?'

/var/www/discourse/lib/s3_helper.rb:276:in `s3_bucket'

/var/www/discourse/lib/s3_helper.rb:192:in `list'

/var/www/discourse/lib/tasks/s3.rake:15:in `should_skip?'

/var/www/discourse/lib/tasks/s3.rake:31:in `upload'

/var/www/discourse/lib/tasks/s3.rake:194:in `block (2 levels) in <main>'

/var/www/discourse/lib/tasks/s3.rake:193:in `each'

/var/www/discourse/lib/tasks/s3.rake:193:in `block in <main>'

/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/rake-13.0.1/exe/rake:27:in `<top (required)>'

/usr/local/bin/bundle:23:in `load'

/usr/local/bin/bundle:23:in `<main>'

Tasks: TOP => s3:upload_assets

(See full trace by running task with --trace)

I, [2020-11-19T09:04:03.934947 #1] INFO -- : installing CORS rule

I, [2020-11-19T09:04:03.935688 #1] INFO -- : Terminating async processes

I, [2020-11-19T09:04:03.935942 #1] INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/12/bin/postmaster -D /etc/postgresql/12/main pid: 49

I, [2020-11-19T09:04:03.936268 #1] INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 166

2020-11-19 09:04:03.936 UTC [49] LOG: received fast shutdown request

166:signal-handler (1605776643) Received SIGTERM scheduling shutdown...

2020-11-19 09:04:03.941 UTC [49] LOG: aborting any active transactions

2020-11-19 09:04:03.946 UTC [49] LOG: background worker "logical replication launcher" (PID 58) exited with exit code 1

2020-11-19 09:04:03.947 UTC [53] LOG: shutting down

166:M 19 Nov 2020 09:04:04.008 # User requested shutdown...

166:M 19 Nov 2020 09:04:04.008 * Saving the final RDB snapshot before exiting.

2020-11-19 09:04:04.024 UTC [49] LOG: database system is shut down

166:M 19 Nov 2020 09:04:04.524 * DB saved on disk

166:M 19 Nov 2020 09:04:04.524 # Redis is now ready to exit, bye bye...

I got the same error, which is why I did a rebuild, with no success :confused:

Hello again,

Will it work if I switch from BackBlaze to Digital Ocean Spaces?

Like this:

  1. Manually download the files from BackBlaze…
  2. Create a DO Space without a CDN
  3. Upload everything manually to the created DO Space
  4. Change the BunnyCDN origin URL to the DO Space URL
  5. Change the app.yml file to point at the DO Space and rebuild
  6. Rebake posts… (roughly as sketched below)
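
For step 6, the rebake is usually run inside the container; a rough sketch, assuming the standard /var/discourse install:

  cd /var/discourse
  ./launcher enter app
  rake posts:rebake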

Is this process workable? Thank you!

Do you have the CORS and tombstone settings as well as both CDN URLs?

Hi @pfaffman :slightly_smiling_face:

I use it as Falco described, with no CORS and tombstone settings because BackBlaze doesn't support them.

And I set the BunnyCDN origin URL to BackBlaze.

2 Likes

It used to work and then broke, right?

Do you have a CDN for the site as well as for S3? Adding the CDN for Discourse is what fixed my restore from another site yesterday.

I'll give BackBlaze a try today.

2 Likes

Yes, it worked until today. It always rebuilt correctly; this has never happened before.

I use BunnyCDN for the site :arrow_down:
DISCOURSE_CDN_URL

and Discourse S3 is BunnyCDN in front of the BackBlaze files and assets :arrow_down:
DISCOURSE_S3_CDN_URL

So these two are different pull zones :arrow_up:

What do you think about this idea? :slightly_smiling_face:

Is this workable? I think I should move to DO Spaces because the Tombstone feature is missing. :confused:

2 Likes

Sounds like either a temporary error or a token expiration. I’d wait a bit.

4 Likes

Hi Falco,

I tried generating new API keys on BackBlaze with no success :confused: or did you mean some other kind of token? Sorry, I'm lost with this :slightly_smiling_face:

1 Like

After generating new keys, does the error persist? Maybe contact BackBlaze support?

1 Like

Yes, same issue :confused: I think they may have changed something without contacting me. Now I've requested a snapshot and I want to move, because I don't think it's fair if they don't notify me about changes. But I'll definitely contact BackBlaze.

Can I move to DO Spaces with the method I wrote above? :slightly_smiling_face: Thanks for your answer! :slightly_smiling_face:

1 Like