Backup frequency/count setting ignored?

(Tom Newsom) #1

Despite these settings:

I’ve recently been finding my discourse in a 503 state due to low disk space and I have to manually delete backups, which are happening every day until the disk runs out.


(Allen - Watchman Monitoring) #2

Was that on latest, which would have included this feature/fix?

(Jay Pfaffman) #3

I think that there might be a bug in which backups that fail because there wasn’t enough space to gzip them don’t get deleted.

(Rafael dos Santos Silva) #4

Exactly. You didn’t have space to finish a proper backup, so the step where old backups get deleted didn’t happen. Note that the leftover files are .tar and not .tar.gz.

Clean your backup folder, and it will work well after that.

PS: Running out of space for basic operations isn’t a bug.
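The cleanup described above can be done from a shell. A minimal sketch, assuming the default backup path of a standard Docker install (adjust `BACKUP_DIR` for your setup):

```shell
# Default backup location on a standard Docker install -- an assumption,
# adjust for your own setup.
BACKUP_DIR=${BACKUP_DIR:-/var/discourse/shared/standalone/backups/default}

if [ -d "$BACKUP_DIR" ]; then
  # Dry run: a finished backup ends in .tar.gz, so any bare .tar is a
  # leftover from a run that died before the gzip step.
  find "$BACKUP_DIR" -maxdepth 1 -name '*.tar' -print
  # Once you've checked the list, delete for real:
  # find "$BACKUP_DIR" -maxdepth 1 -name '*.tar' -delete
fi
```

Run the `-print` form first and only switch to `-delete` once the list shows nothing you want to keep.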

(Allen - Watchman Monitoring) #5

To run out of space for backups is human, to automatically warn of failure is awesome.

@tom_newsom to me the question is… were you on tests-passed and if so, were you notified?

(Michael - #6

Leaving temporary files all over the place is a bug, especially when they contain sensitive data.

(Rafael dos Santos Silva) #7

temporary files = uncompressed backups
all over the place = the backups folder


I agree that a notification to admins on failed backups is a good idea for a feature, but the topic title is about a site setting being ignored, and that was caused by insufficient space to have a working backup.

PS: The notification was tracked and implemented here: If automatic backup fails, there should be a warning. Please report if it’s broken.

(Michael - #8

And the tmp folder within the Discourse directory

(Tom Newsom) #9

Thanks for the replies everyone. Time to upgrade my storage then!

(Steven Merrill) #10

I’m having the same problem: frequency is set to 7 and there is enough disk space to turn the .tar into a .tar.gz, yet it’s still taking a backup every day and also uploading it to S3. Any ideas on how to start troubleshooting this?