I saw another post about this from 2014, but there was no answer and it seemed like they were making it harder than it should be…
I would like some pointers on whether there is an AWS endpoint configuration within app.yml which could be changed to allow use of Google Cloud Storage instead of S3. GCS has a compatibility mode where, as I understand it, all that would need to happen is to change the AWS endpoint:
In your existing tools or libraries, make the following changes:
Change the request endpoint to use the Google Cloud Storage request endpoint.
Replace the Amazon Web Services (AWS) access and secret key with the corresponding Google Cloud Storage access key and secret key (collectively called your Google developer key).
Could this be added as an option under /admin/site_settings/category/files, i.e. a setting for the AWS endpoint along with the access keys? Or can it already be done in app.yml, with something like [AWS_ENDPOINT = "/xxxx/xxx"]?
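For what it's worth, the endpoint swap already works outside Discourse. Here is a minimal sketch with the plain AWS CLI, assuming you have created GCS interoperability (HMAC) keys; the key values and bucket name are placeholders:

# Point any S3 client at the GCS XML endpoint with HMAC credentials
# (created under "Interoperability" in the GCS settings).
export AWS_ACCESS_KEY_ID=GOOG_EXAMPLE_ACCESS_KEY        # placeholder
export AWS_SECRET_ACCESS_KEY=example_secret_key          # placeholder
aws s3 ls s3://your-backup-bucket --endpoint-url https://storage.googleapis.com
aws s3 cp local-backup.tar.gz s3://your-backup-bucket/ --endpoint-url https://storage.googleapis.com

If Discourse exposed the endpoint (and keys) as a setting, the same swap would presumably let its built-in S3 backup uploads target GCS instead.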
#!/bin/bash
# This script copies the latest Discourse backup to Google Cloud Storage,
# then removes the previous (oldest local) backup from the bucket.
# Replace your-backup-bucket with your own GCS bucket name.
BACKUPDIR=/var/discourse/shared/standalone/backups/default
LATESTBACKUP=$(ls -1t $BACKUPDIR/*.gz | head -n 1)
PREVBACKUP=$(basename "$(ls -1tr $BACKUPDIR/*.gz | head -n 1)")
/home/user/gsutil/gsutil cp "$LATESTBACKUP" gs://your-backup-bucket
/home/user/gsutil/gsutil rm gs://your-backup-bucket/"$PREVBACKUP"
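A cron entry along these lines keeps the bucket in step with the nightly Discourse backup; the schedule and script path here are only examples:

# Example crontab entry: run a while after the Discourse backup job has finished.
30 4 * * * /home/user/scripts/backup-to-gcs.sh >> /var/log/backup-to-gcs.log 2>&1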
That works, but I still think Google Cloud Storage could be put to greater use via the S3 compatibility mode described above. Then again, maybe I'm the only one who wants to use non-Amazon tools.
This is probably verging on paranoia, but I store Discourse backups on AWS S3 (which is supported out of the box) and then sync the backups from S3 to GCS using:
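Something along these lines, a sketch assuming gsutil has your AWS credentials configured in ~/.boto and with placeholder bucket names:

# Mirror the S3 backup bucket into GCS (gsutil reads the AWS keys from ~/.boto).
gsutil rsync -r s3://your-s3-backup-bucket gs://your-gcs-backup-bucket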
No programming required. This does increase your storage costs slightly, so it doesn't help if you're trying to save money. I keep the latest 10 backups, each of which is 10 GB, and it costs me $3/month at Google.