The backup process creates a tar file and then applies gzip to it. The tar file contains two kinds of content: an already-gzipped sql dump and, if requested, the contents of uploads. In my case every upload file is already compressed (gz, gzip, gif, jpeg, png, zip), so the final gzip pass shrinks the archive by only about 1%.
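If I've understood correctly, the process is roughly equivalent to this two-step shell sequence (file names here are just illustrative):

```sh
# 1. build the archive; it already contains a gzipped sql dump plus the uploads
tar -cf backup.tar dump.sql.gz uploads/
# 2. compress the whole archive; backup.tar and backup.tar.gz coexist on disk
#    until gzip finishes, so peak disk usage is roughly twice the archive size
gzip backup.tar
```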
I believe it would be better if the backup demanded less free space.
A previous topic from 2016 mentions disabling backup compression, but it looks like the sql dump was not compressed at that time, which shifted the tradeoffs.
I’m aiming to save CPU time. Actually, I was thinking of using 0 as a flag that would change the code path so that it doesn’t gzip at all (sadly, zero is not a compression level supported across all gzip versions, afaik).
Hmm, that wouldn’t help me at all! (Likewise for others who’ve had the same problem with limited disk space.)
If tar were being invoked directly, it could be run with the -z or -j options. If a subshell were being used, the output of tar could be piped into gzip. But I think some higher-level Ruby functions may in fact be in use.
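For the subshell case, a minimal sketch of what I mean (again with illustrative file names):

```sh
# stream the archive straight into gzip: no intermediate backup.tar on disk,
# so only the final backup.tar.gz needs free space
tar -cf - dump.sql.gz uploads/ | gzip > backup.tar.gz
```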
Maybe it shouldn’t be too difficult… I appreciate that changes to backup and restore must be made with great care, but I think just inlining the compression would greatly reduce the free-space requirement without raising any compatibility question.
From `tar --help`:

```
  -a, --auto-compress              use archive suffix to determine the compression
  -z, --gzip, --gunzip, --ungzip   filter the archive through gzip
```
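So if tar itself is doing the archiving, either option would give single-pass compression, e.g. (file names illustrative):

```sh
tar -czf backup.tar.gz dump.sql.gz uploads/   # -z gzips while the archive is written
tar -caf backup.tar.gz dump.sql.gz uploads/   # -a picks the compressor from the .gz suffix
```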