Last week I finally completed the full migration of my 120,000+ upload files, and then realised that every backup since that day had been failing to complete.
In fact, opening the backup page in the Discourse admin shows no backups at all, just a spinner that keeps turning until it times out with a 502 error after about 30 seconds. See picture below.
My hypothesis is that, if the backup routine lists the bucket contents to check for something, then the large number of files stored in my DO Spaces bucket makes the response from the DO Spaces infrastructure exceed the allowed waiting time, and the backup process fails.
In the meantime I restored the setting so that backups are stored locally, but I'd still assume this is not nominal behaviour, although it may be more of a DO performance issue than a Discourse backup routine issue.
Long response times from the DO infrastructure might have to be handled better, or perhaps the listing should be restricted by a naming convention rather than pulling the entire listing of the files stored in the bucket (assuming that is what happens).
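For what it's worth, one way to test this hypothesis would be to time a full listing of the bucket directly against the Spaces endpoint and see whether it alone blows past the roughly 30 seconds the page seems to allow. A rough sketch with boto3 (not anything Discourse runs; the endpoint, region, and bucket name are placeholders for my setup):

```python
# Rough sketch, not Discourse code: time a full listing of the Spaces bucket
# to see whether it alone exceeds the ~30 s the backup page seems to allow.
# Credentials come from the usual AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
# environment variables.
import time
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",  # adjust to your region
    region_name="us-east-1",  # boto3 wants a region even though Spaces ignores it
)

start = time.monotonic()
count = 0
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="my-bucket"):
    count += len(page.get("Contents", []))

print(f"Listed {count} objects in {time.monotonic() - start:.1f} s")
```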
You probably configured the same bucket for uploads and backups before we disallowed it because it causes timeouts. Either create a new bucket for backups or append a path to s3_backup_bucket. Here's an example of how your settings could look afterwards:
s3_upload_bucket: my-bucket
s3_backup_bucket: my-bucket/backups
You might want to move existing backups into the new folder. I’m sure DO has something similar to the AWS S3 Console.
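If the web console gets tedious with many objects, the move could also be scripted. A rough sketch with boto3, assuming the locations from the example above (bucket names and prefix are placeholders, and Spaces is addressed through its S3-compatible endpoint):

```python
# Sketch only: copy the existing backup archives into the new backups/ path
# (or a separate bucket) and remove the originals. Bucket names and prefix
# are placeholders for your own setup.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://nyc3.digitaloceanspaces.com",
    region_name="us-east-1",  # boto3 wants a region even though Spaces ignores it
)

SRC_BUCKET = "my-bucket"
DST_BUCKET = "my-bucket"      # or a dedicated backups bucket
DST_PREFIX = "backups/"

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=SRC_BUCKET):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        # Discourse backups are .tar.gz archives; skip anything already moved.
        if key.endswith(".tar.gz") and not key.startswith(DST_PREFIX):
            s3.copy_object(
                Bucket=DST_BUCKET,
                Key=DST_PREFIX + key,
                CopySource={"Bucket": SRC_BUCKET, "Key": key},
            )
            s3.delete_object(Bucket=SRC_BUCKET, Key=key)
```

Note that a single copy_object call is limited to 5 GB per object, so very large backups would need a multipart copy instead.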
Thanks a lot @gerhard. I actually just created a different bucket and changed the setting to point there (I mean, I now have one bucket for the uploads and one bucket for the backups).
I ran a manual backup twice, and it worked. I can see the files in the bucket via the DigitalOcean interface, but the dedicated Discourse Backups page still times out with the same error when it tries to list the bucket contents.
Those backups are being stored in backups/multisitename (so it would seem that what I've read lately about not being able to share buckets may no longer hold, but that's for another topic).
I thought that I’d set the upload bucket in one of the sites on this multisite install to lc-backups/uploads or maybe lc-backups/uploads/sitename, but it fails with: You cannot use the same bucket for 's3_upload_bucket' and 's3_backup_bucket'. Choose a different bucket or use a different path for each bucket.
So my approach of adding a prefix to the images bucket was the only one that didn't work.
That explains it. Thanks!
FWIW, it would seem that the multisite default of putting backups in a backups directory would solve the two-bucket problem, but you've probably mucked with this code enough for now.
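For anyone else who hits that validation error: my guess at the shape of the check (purely illustrative, not Discourse's actual code) is that the two settings are compared as bucket/path prefixes, which would explain why my-bucket plus my-bucket/backups is accepted while lc-backups/uploads plus lc-backups is rejected:

```python
# Hypothetical illustration of the validation error above; a guess at the
# shape of the check, not Discourse's actual code.
def conflicts(upload_setting: str, backup_setting: str) -> bool:
    # Treat each setting as "bucket[/prefix]" and flag a conflict when the
    # upload location is the same as, or nested inside, the backup location.
    upload = upload_setting.strip("/").lower()
    backup = backup_setting.strip("/").lower()
    return upload == backup or upload.startswith(backup + "/")

print(conflicts("lc-backups/uploads", "lc-backups"))  # True  -> rejected
print(conflicts("my-bucket", "my-bucket/backups"))    # False -> allowed
```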