Backups can be uploaded to S3, but I don’t use S3 and would rather avoid it. I would still like off-site backups. Is there a way to download them using, say, curl? Basic authentication does not work, and the examples I find for using the API key are all for POST requests where it can be added to the JSON data, but the download links for backup files are simple GETs… Any guidance?
Solving my own issue here. The backups can be downloaded using the API key as follows:
curl -LO "https://site.example.com/admin/backups/predictable-filename.tar.gz?api_key=...&api_username=youruser"

(Note the quotes around the URL — an unquoted & would background the command in the shell.)
My filenames don’t seem to be predictable. The date is, but the minutes and seconds are not. Does anyone know of a solution? Thanks.
It’s been a while since I did a backup of my localhost, so things may have changed since Feb 22, 2015, 4:09 AM.
I agree, the “150222” I can recognize as corresponding to the date (yymmdd), but I’m having trouble seeing “090655-52” as the time.
I’m guessing that the focus of calmh’s post was more about using the API key, and that he used the word “predictable” loosely to stand in for the actual file name, not to imply that the file name was human-translatable into a date and time.
I’m also thinking that “090655-52” is not time, but a unique identifier.
What exactly is the problem you’re having?
I want to do a curl request daily to automatically download the backups the way @calmh described, but I need some sort of predictable file names to grab the download. My filenames have varying dates and times.
You can always use an API call to get a list of the current backups which includes the filename.
Hitting the endpoint /admin/backups.json with your API credentials returns a JSON list of the current backups, including each filename.
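I don’t have a capture of the exact response handy, but roughly it looks like the snippet below (the field names and filenames are assumptions for illustration, not from a live site). You can pull the newest filename out with the same sed/grep trick used in the scripts in this thread, assuming the list comes back newest-first:

```shell
# Hypothetical /admin/backups.json response (field names are made up for the demo).
JSON='[{"filename":"forumname-2015-02-22-090655.tar.gz","size":1048576},{"filename":"forumname-2015-02-21-090512.tar.gz","size":1044480}]'

# Split on quotes so each value lands on its own line, then grab the
# first .tar.gz entry (assuming the list is sorted newest-first).
LATEST=$(echo "$JSON" | sed 's/"/\n/g' | grep '\.tar\.gz' | head -1)
echo "$LATEST"
```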
#!/bin/bash
CRED="api_key=[KEY]&api_username=[USER]"

# Fetch the backup list and pull out the first (newest) download URL.
wget "https://domain.com/admin/backups.json?$CRED" -O tmp.json
URL="https:"$(cat tmp.json | sed 's/\"/\n/g' | grep forumname | head -1)
rm tmp.json

# Save the backup under today's date.
TMP_DATE=$(date "+%Y-%m-%d")
wget "$URL?$CRED" -O "forumname-$TMP_DATE.tar.gz"
With curl and automatic purging (keeping last 7 backups):
#!/bin/bash
DIR="/path/to/backup/dir"
CRED="api_key=[KEY]&api_username=[USER]"

# Split the JSON on quotes so each value lands on its own line.
JSON=$(curl -s "https://[HOST]/admin/backups.json?$CRED" | sed 's/\"/\n/g')

# The newest backup sorts last by filename, so reverse-sort and take the first.
DUMP_URL=$(echo "$JSON" | grep https | sort -r | head -1)
DUMP=$(basename "$DUMP_URL")

if [[ -e "$DIR/$DUMP" ]]; then
    echo "Already got the latest: $DUMP"
else
    echo "Downloading the latest: $DUMP"
    curl -# "$DUMP_URL?$CRED" -o "$DIR/$DUMP"
    echo "Keeping last 7 backups"
    # Delete everything past the 7 newest (tail -n +8 starts at line 8).
    ls -1 "$DIR"/*.tar.* | sort -r | tail -n +8 | xargs -r rm
fi
echo "Done."
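The purge step is the only subtle part. With a reverse sort, the newest files come first, so handing everything from line 8 onward to rm keeps exactly the 7 newest — note that this takes `tail -n +8`, not `+7`, which would start deleting at the 7th-newest file and keep only 6. A quick dry run with throwaway files (names made up for the demo) shows the behavior:

```shell
# Demonstrate the retention logic on throwaway files (names are made up).
DIR=$(mktemp -d)
for d in 01 02 03 04 05 06 07 08 09 10; do
    touch "$DIR/forumname-2015-02-$d.tar.gz"
done

# Reverse sort puts the newest first; `tail -n +8` passes everything from
# the 8th line on to rm, so exactly the 7 newest files survive.
ls -1 "$DIR"/*.tar.* | sort -r | tail -n +8 | xargs -r rm

REMAINING=$(ls -1 "$DIR" | wc -l)
echo "$REMAINING backups kept"   # 7 backups kept
rm -r "$DIR"
```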