Automatically download backups?


(Jakob Borg) #1

Backups can be uploaded to S3, but I don’t use S3 and would rather avoid it. I would still like off-site backups. Is there a way to download them using, say, curl? Basic authentication does not work, and the examples I can find for using the API key are all for POST requests, where the key can be added to the JSON data, but the download links for backup files are plain GETs. Any guidance?


(Jakob Borg) #2

Solving my own issue here. The backups can be downloaded using the API key as follows:

curl -LO "https://site.example.com/admin/backups/predictable-filename.tar.gz?api_key=...&api_username=youruser"

(The quotes are needed so the shell does not treat the & as a background operator.)

(Craig Bowes) #3

My filenames don’t seem to be predictable. The date is, but the minutes and seconds are not. Does anyone know of a solution? Thanks.


(Mittineague) #4

It’s been a while since I did a backup of my localhost, so things may have changed since Feb 22, 2015, 4:09 AM:

user-archive-Mittineague-150222-090655-52.csv.gz

I agree, I can recognize the “150222” as corresponding to the date (yymmdd), but I’m having trouble seeing “090655-52” as the time.

I’m guessing that the focus of calmh’s post was more about using the API key, and that he used the word “predictable” as an “alias” for the actual file name, not to imply that the file name was “human translatable” into a date and time.

I’m also thinking that “090655-52” is not time, but a unique identifier.

What exactly is the problem you’re having?


(Craig Bowes) #5

I want to do a curl request daily to automatically download the backups the way @calmh described, but I need a predictable file name to grab the download. My filenames have varying dates and times.


(Dean Taylor) #6

You can always use an API call to get a list of the current backups, which includes the filenames.

Using the endpoint /admin/backups.json would return something like this:

[{"filename":"example-site-2015-04-07-030229.tar.gz","size":4270195399,"link":"//example.com/admin/backups/example-site-2015-04-07-030229.tar.gz"}]

(Jakob Borg) #7

This was about site backups. On my forum, at least, those names are nicely predictable.


(Peter) #8

I had the same unpredictable name problem as @fregas, so here is a small bash script that fetches the latest backup; replace [USER], [KEY], forumname and domain.com:

#!/bin/bash

# Query-string credentials; replace [KEY] and [USER]
CRED="api_key=[KEY]&api_username=[USER]"

# Fetch the backup list and pick out the first download link ("//host/admin/backups/...")
wget "https://domain.com/admin/backups.json?$CRED" -O tmp.json
URL="https:"$(sed 's/"/\n/g' tmp.json | grep '^//' | grep forumname | head -1)
rm tmp.json

# Download it, stamping the local file with today's date
TMP_DATE=$(date "+%Y-%m-%d")
wget "$URL?$CRED" -O "forumname-$TMP_DATE.tar.gz"

(Marcin Rataj) #9

With curl and automatic purging (keeping the last 7 backups):

#!/bin/bash
DIR="/path/to/backup/dir"
CRED="api_key=[KEY]&api_username=[USER]"

# Split the JSON on quotes so each value ends up on its own line
JSON=$(curl -s "https://[HOST]/admin/backups.json?$CRED" | sed 's/"/\n/g')

# The filenames embed the date, so a reverse sort puts the newest download URL first
DUMP_URL=$(echo "$JSON" | grep https | sort -r | head -1)
DUMP=$(echo "$DUMP_URL" | cut -d/ -f6)

if [[ -e "$DIR/$DUMP" ]]; then
    echo "Already got the latest: $DUMP"
else
    echo "Downloading the latest: $DUMP"
    curl -# "$DUMP_URL?$CRED" -o "$DIR/$DUMP"
    echo "Keeping last 7 backups"
    # Delete everything beyond the 7 newest archives (tail -n +8 skips the first 7 lines)
    ls -1 "$DIR"/*.tar.* | sort -r | tail -n +8 | xargs -r rm
fi
echo "Done."

(Peter) #10

This is no longer possible: Problems with downloading backups
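
For reference, newer Discourse versions expect the API credentials in Api-Key and Api-Username request headers rather than in the query string, so listing the backups still works roughly like this (placeholders again; whether the archive itself can be downloaded this way depends on your version, see the linked topic):

curl -s "https://[HOST]/admin/backups.json" \
  -H "Api-Key: [KEY]" \
  -H "Api-Username: [USER]"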