If a backup upload to S3 fails, the backup is stored locally but the backup limit is ignored

Multipart upload backups to Scaleway S3 still aren’t working for me: Backup Upload to S3 Fails on scaleway (multipart upload)

The problem is that if the upload fails, the backup remains stored locally, as with normal local backups. That part is good. However, the backup limit is ignored, so failed uploads keep filling the local disk indefinitely until they cause downtime.

This is on 2.6.4 stable.

I’ve had a problem with one site using Wasabi S3 that kept filling the disk. Neither Discourse nor Wasabi showed any errors in the logs.

I can’t remember now if I moved to a different S3 provider or if it started working. This site is up to date.


I don’t get it. Why are you trying to store backups on Scaleway S3 when you know it doesn’t work? I think using a different S3 provider or setting backup_location to “local” would be the best solution.

Anyway, Discourse should delete the local backup if the upload fails. You should see the message “Removing archive from local storage…” near the end of the backup log.


This wasn’t known to me when migrating from local to S3 storage. It isn’t possible to use a different S3 provider for backups than for the main storage, so using an alternative provider just for backups would require a full migration of the main storage.
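For context, in a standard Discourse install the S3 endpoint and credentials are configured once (for example in `containers/app.yml`), and the backup bucket reuses them, which is why backups can’t target a different provider. Roughly, with example values (the bucket names and endpoint here are placeholders):

```yaml
# containers/app.yml (abridged)
env:
  DISCOURSE_USE_S3: true
  DISCOURSE_S3_ENDPOINT: https://s3.fr-par.scw.cloud  # one endpoint for uploads AND backups
  DISCOURSE_S3_BUCKET: my-uploads-bucket
  DISCOURSE_S3_BACKUP_BUCKET: my-backups-bucket       # same provider; only the bucket differs
  DISCOURSE_BACKUP_LOCATION: s3
```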

Scaleway may be rolling out a fix (as noted in the post I linked to), so rather than rebuilding several times with downtime just to check whether it’s working, I’ve been letting it retry weekly, since the backups that fail to upload are kept in the local backups folder anyway.

It doesn’t, as I stated in my post. IIRC the message saying the archive will be removed still appears in the log, but that is not what actually happens. If anyone is interested in digging into this edge-case bug, I can check and confirm.