Restoring problems

Based on the discourse_docker repository, I have written a small script to automate running it inside a Vagrant machine (executed with set -x so you can see what is actually done).

Did I copy the backup to the wrong location?

And what does the log line Making sure /var/www/discourse/tmp/restores/default/2020-05-13-190832 exists... mean?
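For context, the restore branch of wl.sh looks roughly like this (a sketch reconstructed from the set -x trace below; the exact case-statement structure is an assumption):

restore)
  shift
  backup=$1
  discourse_backup_dir=shared/standalone/backups/default
  mkdir --parents "$discourse_backup_dir"
  rsync -P --verbose "$backup" "$discourse_backup_dir"
  vagrant ssh -c 'sudo docker exec -w /var/www/discourse -i discourse discourse enable_restore'
  # called once without a filename (prints the available backups), then with the basename
  vagrant ssh -c 'sudo docker exec -w /var/www/discourse -i discourse discourse restore'
  vagrant ssh -c "sudo docker exec -w /var/www/discourse -i discourse discourse restore $(basename "$backup")"
  ;;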

~/infra/discourse on  master! ⌚ 21:14:07
$ pwd
/home/pihentagy/infra/discourse
~/infra/discourse on  master! ⌚ 21:01:14
$ ./wl.sh start
+ set -e
+ VAGRANT_MACHINE_NAME=guest
+ cd discourse
+ case "$1" in
+ init
+ printf 'Checking out and updating repo…\n'
Checking out and updating repo…
+ cd ..
+ git clone https://github.com/discourse/discourse_docker.git discourse
fatal: destination path 'discourse' already exists and is not an empty directory.
+ printf 'Repo already there\n'
Repo already there
+ cd discourse
+ printf 'Updating repo…\n'
Updating repo…
+ git pull -r
remote: Enumerating objects: 6, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 6 (delta 2), reused 5 (delta 2), pack-reused 0
Unpacking objects: 100% (6/6), done.
From https://github.com/discourse/discourse_docker
   3e465a2..49ed141  master     -> origin/master
Created autostash: 36aae80
HEAD is now at 3e465a2 Remove all pg12 traces so pg_wrapper doesn't get confused
First, rewinding head to replay your work on top of it...
Fast-forwarded master to 49ed14152971f7f4a7437657987952be44c33c0a.
Applying autostash resulted in conflicts.
Your changes are safe in the stash.
You can run "git stash pop" or "git stash drop" at any time.
+ printf 'Copying config file…\n'
Copying config file…
+ cp ../resources/discourse.yml containers/
+ echo 'Starting Vagrant machine...'
Starting Vagrant machine...
+ vagrant up
Bringing machine 'dockerhost' up with 'virtualbox' provider...
==> dockerhost: Checking if box 'ubuntu/xenial64' is up to date...
==> dockerhost: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> dockerhost: flag to force provisioning. Provisioners marked to run always will still run.
+ vagrant ssh -c 'cd /vagrant;sudo ./launcher start discourse'
2627afdfbaac
Nothing to do, your container has already started!
Connection to 127.0.0.1 closed.
+ exit 0

~/infra/discourse on  master! ⌚ 21:07:56
$ ./wl.sh restore /home/pihentagy/infra/icontest-2020-05-12-033823-v20200506044956.tar.gz
+ set -e
+ VAGRANT_MACHINE_NAME=guest
+ cd discourse
+ case "$1" in
+ shift
+ backup=/home/pihentagy/infra/icontest-2020-05-12-033823-v20200506044956.tar.gz
+ discourse_backup_dir=shared/standalone/backups/default
+ mkdir --parents shared/standalone/backups/default
+ rsync -P --verbose /home/pihentagy/infra/icontest-2020-05-12-033823-v20200506044956.tar.gz shared/standalone/backups/default
icontest-2020-05-12-033823-v20200506044956.tar.gz
    390,774,609 100%  317.41MB/s    0:00:01 (xfr#1, to-chk=0/1)

sent 390,870,133 bytes  received 35 bytes  156,348,067.20 bytes/sec
total size is 390,774,609  speedup is 1.00
+ vagrant ssh -c 'sudo docker exec -w /var/www/discourse -i discourse discourse enable_restore'
Restore are now permitted. Disable them with `disable_restore`
Connection to 127.0.0.1 closed.
+ vagrant ssh -c 'sudo docker exec -w /var/www/discourse -i discourse discourse restore'
You must provide a filename to restore. Did you mean one of the following?

Connection to 127.0.0.1 closed.
+ vagrant ssh -c 'sudo docker exec -w /var/www/discourse -i discourse discourse restore icontest-2020-05-12-033823-v20200506044956.tar.gz'
Starting restore: icontest-2020-05-12-033823-v20200506044956.tar.gz
[STARTED]
'system' has started the restore!
Marking restore as running...
Making sure /var/www/discourse/tmp/restores/default/2020-05-13-190832 exists...
Copying archive to tmp directory...
EXCEPTION: lib/discourse.rb:90:in `exec': Failed to copy archive to tmp directory.
cp: cannot stat '/var/www/discourse/public/backups/default/icontest-2020-05-12-033823-v20200506044956.tar.gz': No such file or directory
lib/discourse.rb:100:in `execute_command'
lib/discourse.rb:90:in `exec'
lib/discourse.rb:40:in `execute_command'
/var/www/discourse/lib/backup_restore/local_backup_store.rb:42:in `download_file'
/var/www/discourse/lib/backup_restore/backup_file_handler.rb:61:in `copy_archive_to_tmp_directory'
/var/www/discourse/lib/backup_restore/backup_file_handler.rb:21:in `decompress'
/var/www/discourse/lib/backup_restore/restorer.rb:42:in `run'
script/discourse:143:in `restore'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/thor-1.0.1/lib/thor/base.rb:485:in `start'
script/discourse:284:in `<top (required)>'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:63:in `load'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:63:in `kernel_load'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/cli/exec.rb:28:in `run'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/cli.rb:476:in `exec'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor.rb:399:in `dispatch'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/cli.rb:30:in `dispatch'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/vendor/thor/lib/thor/base.rb:476:in `start'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/cli.rb:24:in `start'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/exe/bundle:46:in `block in <top (required)>'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/lib/bundler/friendly_errors.rb:123:in `with_friendly_errors'
/usr/local/lib/ruby/gems/2.6.0/gems/bundler-2.1.4/exe/bundle:34:in `<top (required)>'
/usr/local/bin/bundle:23:in `load'
/usr/local/bin/bundle:23:in `<main>'
Trying to rollback...
There was no need to rollback
Cleaning stuff up...
Removing tmp '/var/www/discourse/tmp/restores/default/2020-05-13-190832' directory...
Unpausing sidekiq...
Marking restore as finished...
Notifying 'system' of the end of the restore...
Finished!
[FAILED]
Restore done.
Connection to 127.0.0.1 closed.

Vagrant isn’t supported, but here’s some advice anyway. :wink:

I’m not sure how rsync works. Does the path need to end with a slash? If the file lands in the correct directory, make sure the copied file is readable by the server.
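If readability turns out to be the problem, a conservative host-side fix might look like this (a sketch, assuming the shared path used by the script above):

$ chmod a+r shared/standalone/backups/default/*.tar.gz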


So can Vagrant (VirtualBox) cause some problems?

~/infra/discourse on  master! ⌚ 23:22:23
$ tree discourse/shared 
discourse/shared
└── standalone
    └── backups
        └── default
            └── icontest-2020-05-12-033823-v20200506044956.tar.gz

3 directories, 1 file

~/infra/discourse on  master! ⌚ 23:22:27
$ ls discourse/shared/standalone/backups/default 
icontest-2020-05-12-033823-v20200506044956.tar.gz

~/infra/discourse on  master! ⌚ 23:22:36
$ ls -l discourse/shared/standalone/backups/default
total 381620
-rw-r--r-- 1 pihentagy pihentagy 390774609 May 13 21:08 icontest-2020-05-12-033823-v20200506044956.tar.gz

It seems the file is readable by everyone and lands in the right place. From inside the Discourse Docker container, where should I see the backup? Is my unattended restore script right, or should I copy the file into the Docker container myself? If so, where? (Are there any guides on automating Discourse from outside of Docker?)

+ vagrant ssh -c 'sudo docker exec -w /var/www/discourse -i discourse discourse restore'
You must provide a filename to restore. Did you mean one of the following?

Connection to 127.0.0.1 closed.
+ vagrant ssh -c 'sudo docker exec -w /var/www/discourse -i discourse discourse restore icontest-2020-05-12-033823-v20200506044956.tar.gz'
Starting restore: icontest-2020-05-12-033823-v20200506044956.tar.gz
[

It is not required, but not ending the path with a slash is bad practice, since the result depends on whether the destination directory already exists:

  • If the destination directory already exists, no slash is needed and the file is copied into the directory.
  • If the destination directory does not exist and there is no slash at the end, the file is copied to a file named ‘default’.
  • If the destination directory does not exist and there is a slash at the end, the directory is created and the file is copied into it.

In this case the copying seems to have gone well (by luck).
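A quick demonstration of the three cases (file.tar.gz, dest_exists and dest_missing are hypothetical names):

$ mkdir dest_exists
$ rsync file.tar.gz dest_exists      # directory exists: file lands at dest_exists/file.tar.gz
$ rsync file.tar.gz dest_missing     # no directory, no slash: creates a *file* named dest_missing
$ rsync file.tar.gz dest_missing2/   # no directory, trailing slash: creates the directory and copies into it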

However, since there are no suggestions after “Did you mean one of the following?”, the file is not in the correct place. It looks like the drives are mapped incorrectly to the Docker container.

You could start a backup from within Docker (discourse backup) and see where it ends up on your host filesystem.


Oddly, it is not visible from the host filesystem. Should it be?

vagrant@ubuntu-xenial:~$ sudo docker exec -w /var/www/discourse -i discourse discourse backup
Starting backup...
[STARTED]
'system' has started the backup!
Marking backup as running...
Making sure '/var/www/discourse/tmp/backups/default/2020-05-14-121930' exists...
Making sure '/var/www/discourse/public/backups/default' exists...
Updating metadata...
Pausing sidekiq...
Waiting for sidekiq to finish running jobs...
Dumping the public schema of the database...


Loooots of pg_dump stuff…

Unpausing sidekiq...
Finalizing backup...
Creating archive: discourse-2020-05-14-121930-v20200512064023.tar.gz
Making sure archive does not already exist...
Creating empty archive...
Archiving data dump...
Archiving uploads...
Removing tmp '/var/www/discourse/tmp/backups/default/2020-05-14-121930' directory...
Gzipping archive, this may take a while...
Executing the after_create_hook for the backup...
Deleting old backups...
Cleaning stuff up...
Removing '.tar' leftovers...
Marking backup as finished...
Refreshing disk stats...
Finished!
[SUCCESS]
Backup done.
Output file is in: /var/www/discourse/public/backups/default/discourse-2020-05-14-121930-v20200512064023.tar.gz

vagrant@ubuntu-xenial:~$ find / -name discourse-2020-05-14-121930-v20200512064023.tar.gz 2>/dev/null
vagrant@ubuntu-xenial:~$ sudo docker exec -w /var/www/discourse -i discourse discourse enable_restore
Restore are now permitted. Disable them with `disable_restore`
vagrant@ubuntu-xenial:~$ sudo docker exec -w /var/www/discourse -i discourse discourse restore
You must provide a filename to restore. Did you mean one of the following?

discourse restore discourse-2020-05-14-121930-v20200512064023.tar.gz
discourse restore discourse-2020-05-14-121710-v20200512064023.tar.gz
discourse restore discourse-2020-05-14-120436-v20200512064023.tar.gz

Off-topic: is there any way to highlight lines in code blocks?

You can set the language explicitly after the opening backticks, e.g. ```bash. The default here is text; I believe that the default for new installs is ‘guess the language’.

I meant “look ma, line 5 is important!”, so basically highlighting specific line(s).

Well, usually /var/discourse/shared/standalone/backups is the directory that is visible in your container as /var/www/discourse/public/backups (hence the word shared). Your rsync command is putting the backup inside that directory in order to make it accessible from within the container.

Vice versa, if the container writes to public/backups it should be visible on your host in the shared directory.
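If the mapping is in place, the same files should be visible from both sides. A quick check from inside the Vagrant machine (a sketch; inside the container, public/backups is typically a symlink into /shared):

$ sudo docker exec -i discourse ls /var/www/discourse/public/backups/default    # container view
$ ls /var/discourse/shared/standalone/backups/default                           # host view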

I wrote /var/discourse/shared.... above. But it seems that you are working in ~/infra/discourse, so you are copying to ~/infra/discourse/shared/standalone/backups/default.

Usually the container is mapped to /var/discourse/shared/...

This might be the issue. Can you check whether you have a /var/discourse/shared?

No. You can’t do that in a code block since everything is presented verbatim.

Well, now I just did a backup and checked whether it can be found outside the Docker container, and it cannot.

vagrant@ubuntu-xenial:~$ sudo docker inspect -f "{{.Mounts}}" discourse
[{bind  /var/discourse/shared/standalone /shared   true rprivate} {bind  /var/discourse/shared/standalone/log/var-log /var/log   true rprivate}]
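The same information prints more readably with a Go template loop over the mounts:

$ sudo docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' discourse
/var/discourse/shared/standalone -> /shared
/var/discourse/shared/standalone/log/var-log -> /var/log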

True, but I just ignored it for now. By the way, that rsync was done outside of the (Vagrant) machine where I run my Discourse.

But for now, as you suggested, I did the following:

  • vagrant ssh to the machine and from inside:
    • did a backup with sudo docker exec -w /var/www/discourse -i discourse discourse backup
    • noticed the filepath:
      Output file is in: /var/www/discourse/public/backups/default/discourse-2020-05-14-125606-v20200512064023.tar.gz
      
    • searched the whole vagrant machine for that particular file, but found nothing
      find / -name discourse-2020-05-14-125606-v20200512064023.tar.gz 2>/dev/null
      
    • however, if I enter the Docker container, it is there
      root@ubuntu-xenial-discourse:/var/www/discourse/public/backups/default# ls
      discourse-2020-05-14-120436-v20200512064023.tar.gz  discourse-2020-05-14-121930-v20200512064023.tar.gz
      discourse-2020-05-14-121710-v20200512064023.tar.gz  discourse-2020-05-14-125606-v20200512064023.tar.gz
      

So my question: if I create a backup, should I see it from outside of docker?

In the meantime I just created a Vagrant machine and ran git clone from inside it into the standard /var/discourse directory. The only “oddness” is that I have a discourse.yml inside containers/, not an app.yml.

Yes, and this is the same thing as your original problem:
“if you have a backup outside of docker and you put it in the shared directory, you should see it inside docker” (and you don’t).
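An easy way to test that principle is a round trip with a marker file (marker is a hypothetical name; run the first command on the host, in the directory containing shared/):

$ touch shared/standalone/backups/default/marker
$ vagrant ssh -c 'sudo docker exec -i discourse ls /var/www/discourse/public/backups/default'   # should list marker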

The issue is with your shared directories.

I meant: for now I recreated a brand-new Vagrant machine, with no copying of any previous backups. I did the bootstrapping, started the Docker container, and ran a backup.

Nothing shows up in the Vagrant machine outside of the Docker container.

I think I’ve found the issue: I had mounted this Docker-shared folder as a synced folder in the enclosing Vagrant machine:

config.vm.synced_folder "discourse/", "/var/discourse"

If I comment this line out of my Vagrantfile, the backups “magically” appear.

So the problem was that the Vagrant synced folder (syncing “up” from the host) and the Docker shared folder (syncing “down” from the container) competed over the same directory and broke the mapping.
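After commenting the line out, a reload and a quick check should confirm the fix (a sketch, assuming the repo now lives at /var/discourse inside the VM):

$ vagrant reload
$ vagrant ssh -c 'ls /var/discourse/shared/standalone/backups/default'   # backups made in the container should now appear here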

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.