I went over some common Object Storage providers and can attest to whether or not they work with Discourse.
| Provider | Service Name | Works with Discourse? |
|---|---|---|
| Amazon AWS | S3 | Yes |
| Digital Ocean | Spaces | Yes |
| Linode | Object Storage | Yes |
| Google Cloud | Storage | Yes |
| Scaleway | Object Storage | Yes |
| Vultr | Object Storage | Yes |
| Backblaze | Cloud Storage | Yes* |
| Self-hosted | MinIO | Yes |
| Microsoft Azure | Blob Storage (via Flexify.IO) | Yes |
If you got a different service working, please add it to this wiki.
## Configuration

To store Discourse static assets in your Object Storage, add this configuration to your `app.yml` under the `hooks` section:
```yaml
after_assets_precompile:
  - exec:
      cd: $home
      cmd:
        - sudo -E -u discourse bundle exec rake s3:upload_assets
```
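This hook runs during a rebuild, so after saving `app.yml` rebuild the container to compile the assets and push them to the bucket (assuming a standard install under `/var/discourse` with a container named `app`):

```sh
cd /var/discourse
./launcher rebuild app
```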
When using object storage, you also need a CDN to serve what gets stored in the bucket. I used StackPath CDN in my testing, and other than needing to set `Dynamic Caching By Header: Accept-Encoding` in their configuration, it works fine.
`DISCOURSE_CDN_URL` is a CDN that points to your Discourse hostname and caches requests. It is used mainly for pullable assets: CSS and other theme assets.

`DISCOURSE_S3_CDN_URL` is a CDN that points to your object storage bucket and caches requests. It is used mainly for pushable assets: JS, images and user uploads.

We recommend that these be two different CDNs and that admins set both.
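For example, with both CDNs in place, the `env` section of `app.yml` would contain something like this (the hostnames are placeholders for your own CDN endpoints):

```yaml
  DISCOURSE_CDN_URL: https://discourse-cdn.example.com    # fronts your Discourse hostname
  DISCOURSE_S3_CDN_URL: https://files-cdn.example.com     # fronts the object storage bucket
```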
In the examples below, https://falcoland-files-cdn.falco.dev is a CDN configured to serve the files in the bucket, and the bucket name is falcoland-files.

Choose your provider from the list below and add these settings to the `env` section of your `app.yml` file, adjusting the values accordingly:
### AWS S3

This is what we officially support and use internally. Their CDN offering, CloudFront, also works to front the bucket files. See Setting up file and image uploads to S3 for how to configure the permissions properly.
```yaml
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: us-west-1
DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
DISCOURSE_S3_BUCKET: falcoland-files
DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backups
DISCOURSE_BACKUP_LOCATION: s3
```
### Digital Ocean Spaces

DO's offering is good and works out of the box. The only problem is that their CDN offering is awfully broken, so you need to use a different CDN for the files.

Example configuration:
```yaml
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: whatever
DISCOURSE_S3_ENDPOINT: https://nyc3.digitaloceanspaces.com
DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
DISCOURSE_S3_BUCKET: falcoland-files
DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backups
DISCOURSE_BACKUP_LOCATION: s3
```
### Linode Object Storage

An extra configuration parameter, `HTTP_CONTINUE_TIMEOUT`, is required for Linode.

Example configuration:
```yaml
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: us-east-1
DISCOURSE_S3_HTTP_CONTINUE_TIMEOUT: 0
DISCOURSE_S3_ENDPOINT: https://us-east-1.linodeobjects.com
DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
DISCOURSE_S3_BUCKET: falcoland-files
DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backup
DISCOURSE_BACKUP_LOCATION: s3
```
### Google Cloud Platform Storage

Listing files is broken, so you need an extra ENV variable to skip that step so assets can work. You also need to skip the automatic CORS rule and configure CORS manually.

Since you can't list files, you won't be able to list backups and automatic backups will fail, so we don't recommend using it for backups. However, there might be a solution in this reply.

Example configuration:
```yaml
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: us-east1
DISCOURSE_S3_INSTALL_CORS_RULE: false
FORCE_S3_UPLOADS: 1
DISCOURSE_S3_ENDPOINT: https://storage.googleapis.com
DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
DISCOURSE_S3_BUCKET: falcoland-files
#DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backup
#DISCOURSE_BACKUP_LOCATION: s3
```
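Because `DISCOURSE_S3_INSTALL_CORS_RULE` is disabled, the bucket needs a CORS rule applied by hand. A minimal sketch using `gsutil` is below; the origin, methods and headers are placeholders and may need adjusting to what your forum actually requires:

```sh
# write an illustrative CORS policy to a local file
cat > cors.json <<'EOF'
[
  {
    "origin": ["https://forum.example.com"],
    "method": ["GET", "HEAD", "PUT"],
    "responseHeader": ["*"],
    "maxAgeSeconds": 3600
  }
]
EOF

# apply it to the bucket
gsutil cors set cors.json gs://falcoland-files
```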
### Scaleway Object Storage

Scaleway's offering is also very good, and everything works fine.

Example configuration:
```yaml
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: fr-par
DISCOURSE_S3_ENDPOINT: https://s3.fr-par.scw.cloud
DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
DISCOURSE_S3_BUCKET: falcoland-files
DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backups
DISCOURSE_BACKUP_LOCATION: s3
```
### Vultr Object Storage

An extra configuration parameter, `HTTP_CONTINUE_TIMEOUT`, is required for Vultr.

Example configuration:
```yaml
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: whatever
DISCOURSE_S3_HTTP_CONTINUE_TIMEOUT: 0
DISCOURSE_S3_ENDPOINT: https://ewr1.vultrobjects.com
DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
DISCOURSE_S3_BUCKET: falcoland-files
DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backup
DISCOURSE_BACKUP_LOCATION: s3
```
### Backblaze B2 Cloud Storage

You need to skip the automatic CORS rule and configure CORS manually. There are reports of clean up orphan uploads not working correctly with Backblaze.

Example configuration:
```yaml
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: "us-west-002"
DISCOURSE_S3_INSTALL_CORS_RULE: false
DISCOURSE_S3_CONFIGURE_TOMBSTONE_POLICY: false
DISCOURSE_S3_ENDPOINT: https://s3.us-west-002.backblazeb2.com
DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
DISCOURSE_S3_CDN_URL: https://falcoland-files-cdn.falco.dev
DISCOURSE_S3_BUCKET: falcoland-files
DISCOURSE_S3_BACKUP_BUCKET: falcoland-files/backup
DISCOURSE_BACKUP_LOCATION: s3
```
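One way to configure the CORS rule manually is with the official `b2` command-line tool. The sketch below is an assumption based on its `update-bucket` command and B2's CORS rule format (command names and flags vary between CLI versions), so double-check against `b2 update-bucket --help` before using it; the rule values are illustrative only:

```sh
# assumed b2 CLI invocation: apply a CORS rule allowing simple downloads
# via the S3-compatible API; "allPublic" makes the bucket publicly readable,
# which CDN-served uploads generally need
b2 update-bucket --corsRules '[
  {
    "corsRuleName": "discourseDownloads",
    "allowedOrigins": ["*"],
    "allowedOperations": ["s3_get", "s3_head"],
    "allowedHeaders": ["*"],
    "maxAgeSeconds": 3600
  }
]' falcoland-files allPublic
```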
### MinIO Storage Server

There are a few caveats and requirements you need to ensure are met before you can use a MinIO storage server as an alternative to S3:
- You have a fully configured MinIO server instance.
- You have Domain Support enabled in the MinIO configuration.
- You have DNS configured properly for MinIO, so that bucket subdomains resolve to the MinIO server, and the MinIO server is configured with a base domain (in this case, `minio.example.com`).
- The bucket `discourse-data` exists on the MinIO server and has a "public" policy set on it.
- Your S3 CDN URL points to a properly configured CDN that points to the bucket and caches requests, as stated earlier in this document.
- Your CDNs are configured to use a `Host` header of the core S3 URL - for example, `discourse-data.minio.example.com` - when fetching data; otherwise it can cause CORB problems.
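On the MinIO side, the prerequisites above translate roughly into the following sketch. The alias, host and bucket names are the example values used here, and the exact `mc` subcommands differ between client versions (older releases use `mc policy set` instead of `mc anonymous set`):

```sh
# enable virtual-host style bucket addressing; set this in the MinIO
# server's environment and restart the server
export MINIO_DOMAIN=minio.example.com

# point the mc client at the server and create the bucket
mc alias set myminio https://minio.example.com myaccesskey mysecretkey
mc mb myminio/discourse-data

# allow anonymous downloads ("public" read policy) on the bucket
mc anonymous set download myminio/discourse-data
```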
Assuming the caveats and prerequisites above are met, an example configuration would be something like this:
```yaml
DISCOURSE_USE_S3: true
DISCOURSE_S3_REGION: anything
DISCOURSE_S3_ENDPOINT: https://minio.example.com
DISCOURSE_S3_ACCESS_KEY_ID: myaccesskey
DISCOURSE_S3_SECRET_ACCESS_KEY: mysecretkey
DISCOURSE_S3_CDN_URL: https://discourse-data-cdn.example.com
DISCOURSE_S3_BUCKET: discourse-data
DISCOURSE_S3_BACKUP_BUCKET: discourse-backups
DISCOURSE_BACKUP_LOCATION: s3
DISCOURSE_S3_INSTALL_CORS_RULE: false
```
CORS is still going to be enabled on MinIO even if the rule is not installed by the app rebuilder - by default, it seems, CORS is enabled on all HTTP verbs in MinIO, and MinIO does not support BucketCORS (S3 API) as a result.
### Azure Blob Storage with Flexify.IO

Azure Blob Storage is not an S3-compatible service, so it cannot be used with Discourse directly. There is a plugin, but it is broken.

The easiest way to expose an S3-compatible interface for Azure Blob Storage is to add a Flexify.IO server that translates the Azure Storage protocol into S3.

As of this writing, the service is free on Azure, and you only need a very basic (cheap) VM tier to run it. It does, however, require a bit of setup.
1. In the Azure portal, create a new resource of `Flexify.IO - Amazon S3 API for Azure Blob Storage`.
2. For light usage, the minimum VM config seems to work just fine. You can accept most of the default config. Remember to save the PEM key file when you create the VM.
3. Browse to the Flexify.IO VM link and enter the system. Follow the instructions to set up the Azure Blob Storage data provider and the generated S3 endpoint. Make sure the endpoint config setting `Public read access to all objects in virtual buckets` is true. Copy the S3 endpoint URL and keys.
4. Press New Virtual Bucket and create a virtual bucket. It can have the same name as your Azure Blob Storage container, or a different one. Link any container(s) to merge into this virtual bucket. This virtual bucket is used to expose a publicly-readable bucket via S3.
5. By default, Flexify.IO installs a self-signed SSL certificate, while an S3 endpoint requires HTTPS. SSH into the VM using the key file (the default username is `azureuser`) and replace the following files with the correct ones (a sketch of the copy commands appears at the end of this section):
   - `/etc/flexify/ssl/cert.pem` - replace with your certificate file (PEM encoding)
   - `/etc/flexify/ssl/key.pem` - replace with your private key file (PKCS#8 PEM encoding)
   These files are owned by root, so you need `sudo` to replace them. Make sure the replacement files keep the same ownership and permissions as the originals: `root:root` and `600`.
6. By default, Flexify.IO creates a root-level S3 service with multiple buckets, but Discourse requires sub-domain support for buckets. Go to `<your Flexify.IO VM IP>/flexify-io/manage/admin/engines/configs/1`, which opens a hidden config page. Specify the S3 base domain (say it is `s3.mydomain.com`) in the `Endpoint hostname` field, which should be blank by default. Press Save to save the setting.
7. Restart the Flexify.IO VM in the Azure portal.
8. In your DNS, map `s3.mydomain.com` and `*.s3.mydomain.com` to the Flexify.IO VM IP.
9. In Discourse, set the following in the admin settings page (yes, there is no need for these settings to be in `app.yml`):

   ```
   use s3: true
   s3 region: anything
   s3 endpoint: https://s3.mydomain.com
   s3 access key: myaccesskey
   s3 secret access key: mysecretkey
   s3 cdn url: https://<azure-blob-account>.blob.core.windows.net/<container>
   s3 bucket: <virtual bucket>
   s3 backup bucket: <backup bucket> (any container will do, as it does not require public read access and Flexify.IO will expose it automatically)
   backup location: s3
   ```
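For step 5, replacing the self-signed certificate boils down to copying your own certificate and key over the existing files and restoring their ownership and permissions. A minimal sketch, where `fullchain.pem` and `privkey.pem` are placeholders for your own certificate and key files:

```sh
# copy the real certificate and key over the self-signed ones
sudo cp fullchain.pem /etc/flexify/ssl/cert.pem
sudo cp privkey.pem   /etc/flexify/ssl/key.pem

# keep the original ownership and permissions
sudo chown root:root /etc/flexify/ssl/cert.pem /etc/flexify/ssl/key.pem
sudo chmod 600       /etc/flexify/ssl/cert.pem /etc/flexify/ssl/key.pem
```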