Configure automatic backups for Discourse

So you’d like to automatically back up all your Discourse data every day?

Configuring daily backups

Go to your /admin site settings, search for backup, and set backup_frequency to 1.

A backup will now be taken every day.
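
If you’re self-hosted, you can alternatively pin this in the env section of your app.yml — a minimal sketch, assuming the usual DISCOURSE_<SETTING_NAME> environment override for site settings (the same mechanism used by the S3 variables further below):

env:
  ## take a backup every day (overrides the backup_frequency site setting)
  DISCOURSE_BACKUP_FREQUENCY: 1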

Store backups on the local server (default)

By default, backups are stored on the local server’s disk.

If you’re self-hosted, you can find them at /var/discourse/shared/standalone/backups/default.

Store backups on S3-compatible storage (using admin panel config)

You can also store backups on S3-compatible storage, such as AWS S3.

You can configure this either in the admin panel settings or, if you’re managing a self-hosted instance, as environment variables in app.yml. For the latter, see further below.

In order to store backups on Amazon S3, you’ll need to create a unique, private S3 bucket and put its name in the s3_backup_bucket site setting.

You can follow most of the steps in Set up file and image uploads to S3 if you need help in setting up a new bucket.

:warning: WARNING

Storing backups and regular uploads in the same bucket and folder is no longer supported and will not work.

The s3_backup_bucket path should only be used for backups. If you need to use a bucket that contains other files, make sure you provide a prefix when you configure the s3_backup_bucket setting (example: my-awesome-bucket/backups), and make sure that files under that prefix are private.

Next, set your S3 credentials under the Files section: s3_access_key_id, s3_secret_access_key, and s3_region.

Finally, select “S3” as the backup_location.

From now on, all backups will be uploaded to S3 and no longer stored locally. Local storage will only be used for temporary files during backups and restores.

Go to the Backups tab in the admin dashboard to browse the backups – you can download them any time to do a manual offsite backup.

Store backups on S3-compatible storage (using environment variables)

You can also configure S3 backups using environment variables in app.yml. For more information, see Configure an S3 compatible object storage provider for uploads.

Note that the above article covers app.yml setup for both backups and file/image uploads. If you only want to use S3 for backups (and not for file/image uploads), you can omit the following parameters from your app.yml config:

  • DISCOURSE_USE_S3
  • DISCOURSE_S3_CDN_URL
  • DISCOURSE_S3_BUCKET

In this case, you also don’t need to configure the after_assets_precompile step, nor set up a CDN.

Be sure to include all other parameters that are required for your storage provider, as mentioned in the article. Here is one example configuration that only activates S3 for backups (for Scaleway S3):

## region and endpoint of your S3-compatible provider (Scaleway in this example)
DISCOURSE_S3_REGION: nl-ams
DISCOURSE_S3_ENDPOINT: https://s3.nl-ams.scw.cloud
## credentials with read/write access to the backup bucket
DISCOURSE_S3_ACCESS_KEY_ID: my_access_key
DISCOURSE_S3_SECRET_ACCESS_KEY: my_secret_access_key
## bucket name, optionally with a prefix used only for backups
DISCOURSE_S3_BACKUP_BUCKET: my_bucket/my_folder
## send backups to S3 instead of local disk
DISCOURSE_BACKUP_LOCATION: s3

Archiving to lower-cost storage

Note that on AWS S3 you can also add a bucket lifecycle rule that automatically moves backups to Glacier, which keeps your S3 backup costs low. Other storage providers often have a similar offering.
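
For example, here is a minimal lifecycle configuration you could apply in the S3 console or with aws s3api put-bucket-lifecycle-configuration. This is a sketch only; the rule ID, the 30-day threshold, and the empty prefix are assumptions to adapt to your bucket:

{
  "Rules": [
    {
      "ID": "archive-backups-to-glacier",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}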


You can archive backups from your S3 bucket to Glacier. It’s cheaper, but a restore takes more time.

This site will help you reduce backup costs:


Setting this up can be rather confusing. Here’s a simple guide to help you out.

  • Log into your Discourse admin panel
  • Configure daily backups
  • Set maximum backups to 7
  • Log into your Amazon Web Services account
  • Go to the S3 dashboard
  • Open the bucket containing the backups
  • Click on the properties tab
  • Activate versioning
  • Open the Lifecycle menu
  • Add a rule for the whole bucket
  • Set current version to expire after 15 days
  • Set previous version to:
      ◦ archive to Glacier after 1 day
      ◦ expire after 91 days
  • Save and logout

How it works

Versioning retains the backups that Discourse automatically deletes. One day after being deleted, a backup is moved to Glacier storage. After 91 days, it is deleted from Glacier.
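
Expressed as an S3 lifecycle configuration, the scheme above would look roughly like this (a sketch of the steps described; the rule ID and empty prefix are assumptions — note the 91-day noncurrent expiry respecting Glacier’s 90-day minimum mentioned below):

{
  "Rules": [
    {
      "ID": "discourse-backup-rotation",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 15 },
      "NoncurrentVersionTransitions": [
        { "NoncurrentDays": 1, "StorageClass": "GLACIER" }
      ],
      "NoncurrentVersionExpiration": { "NoncurrentDays": 91 }
    }
  ]
}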

Warning

Amazon charges you for items stored in Glacier for a minimum of 90 days, even if you delete them earlier. Make sure your Glacier lifecycle keeps your files for at least 90 days.


I may have missed this, but how do I make sure a bucket for backups is private the proper way? It doesn’t seem to be described in Set up file and image uploads to S3.


Looks like there is no such option anymore.

Update: ah, now it’s split into two steps of this wizard.

So I guess it should be like this:


Does S3 backup functionality play nice without an IAM access key and secret if you’re using AWS roles assigned to the instance and s3_use_iam_profile is enabled in the settings?

Edit: The answer is YES! Just ensure you have the region set appropriately.
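
For anyone else trying this, a minimal sketch of the relevant app.yml env entries when relying on an instance role (the region and bucket are placeholders, and DISCOURSE_S3_USE_IAM_PROFILE assumes the usual DISCOURSE_<SETTING_NAME> mapping to the s3_use_iam_profile site setting):

env:
  DISCOURSE_S3_USE_IAM_PROFILE: true
  DISCOURSE_S3_REGION: us-east-1
  DISCOURSE_S3_BACKUP_BUCKET: my-backup-bucket
  DISCOURSE_BACKUP_LOCATION: s3
  ## no DISCOURSE_S3_ACCESS_KEY_ID / DISCOURSE_S3_SECRET_ACCESS_KEY needed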


I had this problem too, which was resolved by adding a couple of lines to the policy document:

"Resource": [
        "arn:aws:s3:::your-uploads-bucket",
        "arn:aws:s3:::your-uploads-bucket/*",
        "arn:aws:s3:::your-backups-bucket",
        "arn:aws:s3:::your-backups-bucket/*"
      ]
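
For context, here is a sketch of the complete statement that fragment might sit in. The action list is an assumption on my part — take the authoritative list from Set up file and image uploads to S3:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::your-uploads-bucket",
        "arn:aws:s3:::your-uploads-bucket/*",
        "arn:aws:s3:::your-backups-bucket",
        "arn:aws:s3:::your-backups-bucket/*"
      ]
    }
  ]
}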

Should this be added to the OP (which isn’t a wiki)? Or perhaps in the OP of Set up file and image uploads to S3?


Is there a good reason daily backups aren’t the default?

Fixes needed for OP

  • Add something about moving S3 backups to Glacier?
  • Delete the S3 stuff and link to Configure an S3 compatible object storage provider for uploads

It’d be overkill and expensive. Same reason we don’t recommend changing the oil in your car every 500 miles?

You back up your servers once a week?

I think your 500 mile oil change analogy would be a good answer for “why not keep 30 backups?”, but it doesn’t make sense to me here.

Do you not think that, if you’re going to keep 5 backups, most people would rather have a backup every day, so that if they do have to revert to one they’ve lost no more than a day? The only time I remember an older backup being useful is when images fell out of tombstone.

If they have so little disk space that they can’t hold 5 backups, it would seem better to learn that in 5 days when they still have some memory of what they did than wait 4-5 weeks.


Yes, that is correct, for my self hosted servers.

If you prefer different settings, feel free to edit them from the defaults. What’s stopping you from doing so?


Well, I’ll be!

I’m cleaning up topics like this one. If the defaults were changed, it wouldn’t be needed!

I’m convinced that the defaults won’t be changed, so I’ll move ahead. :wink:


Once a week is a good starting point and a solid default.


I had the very same problem!

I suggest adding a note about this to the initial post too!


It would be nice if this could be used to upload to a different S3-compatible provider, such as a self-hosted MinIO instance.

If you want to use MinIO, see Using Object Storage for Uploads (S3 & Clones). If you want different services for backups and assets, then you’re out of luck (though there are ways to set up triggers to copy from one bucket to another).

No, I just want it for backups, so they get automatically pushed to a different machine on my network.

Then the guide I linked is what you’re looking for.


Is it possible to back up more frequently than once per day? I’d rather not make assumptions and set backup_frequency to a float.


If you want more frequent backups you’ll need to do them externally.
