Restore a backup from the command line

Here’s how to restore a Discourse backup from the command line, without ever opening the Discourse web UI. This is handy when you’re moving servers.

Prerequisites

  • Download the latest backup file from the source Discourse instance (one way to do this is sketched just after this list).
  • The destination Discourse instance needs to be bootstrapped (run ./discourse-setup or copy over your existing app.yml).
  • Make sure that the destination Discourse instance is on the latest version. Update it if necessary.
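
If you still need to grab that backup file, it can usually be copied straight off the source server, since a standard Docker install keeps backups in the same shared folder on both ends. A minimal sketch, using a placeholder source IP and the example filename from further down this guide:

# list the backups available on the source server
ssh root@<source_ip> 'ls -lh /var/discourse/shared/standalone/backups/default'

# copy the one you want down to your local machine
scp root@<source_ip>:/var/discourse/shared/standalone/backups/default/sitename-2019-02-03-042252-v20190130013015.tar.gz /path/to/backup/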

Transfer Backup

SSH into the destination server and create the backup folder there:

mkdir -p /var/discourse/shared/standalone/backups/default

Upload your backup file to the destination server.

scp /path/to/backup/backup.tar.gz root@192.168.1.1:/var/discourse/shared/standalone/backups/default

Of course, replace the above paths, filenames, and server names with the ones you are using – but you do want the backup file to end up in

/var/discourse/shared/standalone/backups/default

:mega: You can also upload and download your Discourse backup file from popular web storage sites such as Google Drive, Dropbox, OneDrive, etc – you’ll need to look up the specific command line instructions based on your preferred web storage provider.
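
If you go that route and your provider gives you a direct download link, you can pull the file straight into the backups folder on the destination server. A rough sketch, where the URL is purely a placeholder:

cd /var/discourse/shared/standalone/backups/default
# the URL below is a placeholder; substitute the direct download link from your storage provider
wget "https://example.com/sitename-2019-02-03-042252-v20190130013015.tar.gz"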

:warning: DO NOT CHANGE THE FILENAME OF THE BACKUP! Discourse treats the backup filename as metadata, so if you change the filename, restoring will not work. Stick with the original file name.

In the scp example above, replace /path/to/backup/backup.tar.gz with the local path of your backup file, and replace 192.168.1.1 with the IP address (or hostname) of the destination server.
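
Before moving on, it’s worth confirming that the file really landed in the folder the restore step expects. For example, from your local machine:

ssh root@192.168.1.1 'ls -lh /var/discourse/shared/standalone/backups/default'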

Restore Backup

Access your destination server and go to the Discourse folder

cd /var/discourse

Enter the Discourse Docker app container

./launcher enter app

From inside the Docker container, enable restores via

discourse enable_restore
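
If the discourse CLI isn’t available for some reason, the same thing can usually be done with a one-off Rails command from inside the container; a sketch, assuming allow_restore is still the name of the underlying site setting:

cd /var/www/discourse
# assumption: allow_restore is the site setting that `discourse enable_restore` toggles
rails runner "SiteSetting.allow_restore = true"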

Restore the backup file

discourse restore sitename-2019-02-03-042252-v20190130013015.tar.gz
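
The filename must match your uploaded file exactly (see the warning above). If you’re unsure of the exact name, you can list the available backups from inside the container first; on a standard install the backups folder from earlier is mounted at /shared, so something like:

ls /shared/backups/default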

Exit the Discourse Docker app container

exit

Rebuild

After the restore process is complete, rebuild the destination instance.

:mega: Now is a good time to update /var/discourse/containers/app.yml with full HTTPS, additional plugins, or CDN configuration. Compare the app.yml configuration of both instances to make sure nothing is missing!
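
One way to compare them is to copy the source server’s app.yml next to the new one and diff the two; a sketch, where source-app.yml is just an arbitrary name for the copied file:

cd /var/discourse/containers
# source-app.yml is a copy of the old server's app.yml (the name is arbitrary)
diff source-app.yml app.yml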

cd /var/discourse
./launcher rebuild app

:tada: That’s it. Your destination server is successfully restored.


Should this also work when Nginx is used as a reverse proxy from the start? I restored my initial app.yml file in the containers folder, rebuilt the app, and then restored a backup. However, when I then do another rebuild of the app, Nginx doesn’t load.

In the production.log I see errors like this:

Can't reach '/uploads/default/original/1X/6e21afbc3c926cdb7bed5741fc91fc259de0f876.jpeg' to get its dimension.
Can't reach '/uploads/default/original/1X/cf0016bd042f57c01fd9752a8b1e344c8f409007.jpeg' to get its dimension.
Can't reach '/uploads/default/original/1X/fb71d0deb50ac4ccd5625b0fd76d6763117bcbc3.jpeg' to get its dimension.

The log then ends with Job exception: connect_write timeout reached. The production_error.log is empty. The unicorn_stderr.log mentions errors with Redis:

Failed to report error: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL) 3 Job exception: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)

The unicorn_stdout.log mentions this as well:

Sidekiq PID 2193 done reopening logs...
2021-12-24T07:50:41.007Z pid=2193 tid=21oh ERROR: heartbeat: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
2021-12-24T07:50:41.503Z pid=2193 tid=ea1 WARN: Unable to flush stats: Error connecting to Redis on localhost:6379 (Errno::EADDRNOTAVAIL)
Loading Sidekiq in process id 145

I’ve rebuilt the instance a few times already. A default build (without the Nginx proxy) works; however, I can’t upload my backup file then because it’s too large. I’ll see if I can start from scratch without Let’s Encrypt (I hit the weekly limit of 5 certificate requests for my domain) and try to increase the upload limits. Hopefully that works. In the meantime, any hints and tips are welcome.

I think I found the issue with the Nginx proxy: the full path to that .sock file wasn’t reachable by Nginx. Fixed now. Re-applying the backup.

Edit: Yes, just make sure all paths to the backup are readable by the container and Nginx can read that sock file. Basic troubleshooting… But I guess it was late yesterday.
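
For anyone else hitting this: the outer Nginx has to be able to reach the socket the container exposes. A sketch of the relevant proxy block, assuming the standard socketed template and default paths (adjust to your own setup):

location / {
    # socket path assumes the stock web.socketed template; adjust if yours differs
    proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;
    proxy_set_header Host $host;
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}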


@techapj should we mention the reverse proxy scenario indicated above, in the first post? Otherwise that information will be lost forever.
