I know I’m going to have to increase the size of my Digital Ocean server eventually, but since that essentially doubles the monthly cost, I’m putting it off as long as I can.
After the latest update (when it switched to the new Postgres), I’ve got to the point where the backup itself can be made, but then it fails while gzipping it. Ironic.
Since I copy the backup off the server and store it elsewhere after it’s created, the gzip doesn’t add much value (just a slightly shorter transfer time), and I can gzip it after I get it off the server for long-term storage. Also, much of the backup’s size is the uploads, and the images and such are already compressed.
But currently, because of the gzip step, the entire backup fails.
So - is there a way to simply say don’t gzip the backup?
Block storage is pretty cheap, I think. You could use it for just backups, or for uploads as well. It might be a bit tricky to make sure the block storage also gets used for the temporary space where the backup is built.
Can I use block storage just for the backups? If so, that might be an option to extend the time until I have to double the server, unless it first creates the backup locally and then copies it over (as with S3), in which case that wouldn’t help at all.
Yes. It would be the same idea as I described for the uploads. The thing I can’t remember is where the temporary files get written; I think that may show up in the backup logs. You’d just make sure that directory is mapped to the extra storage space in your app.yml (something like the sketch below).
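A minimal sketch of what that mapping might look like, assuming your block storage volume is mounted at /mnt/blockstorage on the host and that the container keeps finished backups under /shared/backups; the paths are assumptions, so adjust them for your own setup:

```yaml
# Hypothetical excerpt from containers/app.yml - host paths are assumptions.
volumes:
  - volume:
      host: /var/discourse/shared/standalone    # existing shared data stays on the droplet's disk
      guest: /shared
  - volume:
      host: /mnt/blockstorage/discourse-backups # block storage mount on the host (assumed path)
      guest: /shared/backups                     # where finished backups land inside the container
```

After a `./launcher rebuild app`, completed backups would end up on the block storage volume. If the temporary working files are written somewhere else under /shared, you’d still need enough local disk for those, which is the part I’d check in the backup logs first.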