Local backups broken in Docker + Digital Ocean install


(Wes Osborn) #1

Continuing the discussion from My backups appear to have broken:

Installed Discourse 0.9.9.3 via the Docker install on an Ubuntu Digital Ocean droplet. This server isn’t doing anything except running Discourse. We are attempting to run a local backup of Discourse so we can migrate it to a local VM. When I attempt to take a manual backup, the option is greyed out; it appears that a backup is already running.


When I cancel the job and attempt to start another one, it says the job is already running.

When I look at the logs section, it is blank; all I see is a spinner.

I have done a ./launcher rebuild app and have also rebooted the server. A ./launcher logs app doesn’t show anything interesting either. I also checked /sidekiq and tried to trigger Jobs::CreateBackup manually; when I trigger it from /sidekiq, it returns with OK under the Last Result column.
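For anyone reproducing this, the same job can also be kicked off from a Rails console inside the container; something like the following should do it (a sketch, assuming the standard Jobs.enqueue API):

cd /var/docker
./launcher ssh app             # enter the app container
rails c                        # open a Rails console
Jobs.enqueue(:create_backup)   # enqueues the same Jobs::CreateBackup that Sidekiq runs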

Any additional idea as to some places I could look for logs to see what might be going on here?


(Sam Saffron) #2

Is there anything in

/var/docker/shared/log/rails/production.log ?
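One way to watch it live while reproducing the problem (adjust the path to your install):

tail -f /var/docker/shared/log/rails/production.log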

cc @zogstrip


(Wes Osborn) #3

It’s pretty bare in there too. I tailed it while attempting to cancel and restart the job, and here is what I got:

Processing by Admin::BackupsController#status as JSON
Completed 200 OK in 17ms (Views: 0.5ms | ActiveRecord: 12.6ms)
Started GET "/admin/backups.json" for 23.123.250.237 at 2014-04-29 02:13:41 +0000
Processing by Admin::BackupsController#index as JSON
Completed 200 OK in 6ms (Views: 0.4ms | ActiveRecord: 0.0ms)
Started GET "/admin/backups/cancel.json" for 23.123.250.237 at 2014-04-29 02:13:45 +0000
Processing by Admin::BackupsController#cancel as JSON
Completed 200 OK in 2ms (Views: 0.4ms | ActiveRecord: 0.0ms)
Started POST "/admin/backups" for 23.123.250.237 at 2014-04-29 02:14:02 +0000
Processing by Admin::BackupsController#create as */*
Completed 200 OK in 11ms (Views: 0.7ms | ActiveRecord: 0.0ms)

Just a note for others: on my install this was actually in /var/docker/shared/standalone/log/rails.
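If your container directory is named differently, something like this should locate the right log directory:

find /var/docker/shared -name production.log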


(Régis Hanol) #4

Hmm… seems like a previous backup/restore operation went wrong and did not finish properly.

What do you have in /var/docker/shared/standalone/log/rails/unicorn.stdout.log?

Do you see any [SUCCESS] or [FAILED]?
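A quick way to check, assuming the path above:

grep -E "\[(SUCCESS|FAILED)\]" /var/docker/shared/standalone/log/rails/unicorn.stdout.log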


(Wes Osborn) #5

There was a failure in an attempt to back up to our S3 bucket:

    Marking backup as finished...
    Finished!
    [FAILED]
    [fog][WARNING] fog: the specified s3 bucket name(discourse_backup.clcohio.org) is not a valid dns name, which will negatively impact performance.  For details see: http://docs.amazonwebservices.com/AmazonS3/latest/dev/BucketRestrictions.html
    [fog][WARNING] fog: the specified s3 bucket name(discourse_backup.clcohio.org) is not a valid dns name, which will negatively impact performance.  For details see: http://docs.amazonwebservices.com/AmazonS3/latest/dev/BucketRestrictions.html

Since we were having issues with S3, we had turned it off (and removed the S3 entries as well), but maybe this failed S3 backup then gummed up the works?


(Régis Hanol) #6

Most likely. Bucket names containing dots are not really supported. Does it work now?


(Jeff Atwood) #7

I think we should flat out throw an error if someone specifies an S3 bucket with a period in it. This has come up too many times and results in way too many support issues.

(We already updated the HOWTO and the help for this field to tell people not to use periods in S3 buckets…)
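For illustration, the kind of guard being proposed could be as small as this sketch (not the actual Discourse code; the method name is hypothetical). The underlying problem is that a dot in the bucket name breaks the wildcard SSL certificate match on *.s3.amazonaws.com, which is also why fog warns about DNS validity:

# a sketch of the check being proposed; the method name is hypothetical
def validate_s3_bucket_name!(name)
  raise ArgumentError, "S3 bucket names with periods are not supported" if name.include?(".")
end

validate_s3_bucket_name!("discourse_backup.clcohio.org")   # => raises ArgumentError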


(Wes Osborn) #8

No, it still doesn’t work, even after removing all S3 info from the setup.


(Wes Osborn) #9

We manually moved over the information we needed for our configuration. My guess is that a failed S3 backup also kills the backup process for local backups. Maybe if we had gotten a successful S3 backup going, things would have been fine, but for us it was easier to manually move over the important parts of the configuration.
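If anyone else needs to migrate settings by hand like this, they can be dumped from a Rails console; something like this should work (a sketch, assuming SiteSetting.all_settings is available on your version):

./launcher ssh app
rails c
SiteSetting.all_settings.each { |s| puts "#{s[:setting]} = #{s[:value]}" }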


(Bradley Boven) #10

Was there ever a solution to this? I’m having the same issue: the backup job always appears to be running, and even if I cancel it, as soon as I reload the page it says it is creating one again. There are also no logs in the logs tab.

Can I manually purge the backup job?


(Régis Hanol) #11

If you’re using our recommended setup, here’s how to do it:

# ssh into your server
cd /var/docker
./launcher ssh app                   # enter the app container
rails c                              # open a Rails console inside it
BackupRestore.mark_as_not_running!   # clear the stale "backup running" flag
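As far as I know, the “backup is running” flag is kept in Redis, so this simply clears the stale marker; reload /admin/backups afterwards and the backup button should be enabled again.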

(Passante) #12

I’m having the same trouble with the latest Docker image. How can I solve it? The API seems to have changed.
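(For anyone landing here on a newer install: the paths and launcher commands have changed over time, so the equivalent steps are probably closer to this — a sketch, adjust to your setup:)

cd /var/discourse          # newer installs live here instead of /var/docker
./launcher enter app       # newer launcher versions use "enter" instead of "ssh"
rails c
BackupRestore.mark_as_not_running!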