Can't restore a 20GB backup on a 60GB DigitalOcean instance (fresh Discourse install)

So, I have a ~20GB backup.

When I try to restore it into a fresh Discourse install on a 60GB DigitalOcean droplet, the UI freezes and the server ends up in the following state:

df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           396M   26M  370M   7% /run
/dev/vda1        58G   58G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/vda15      105M  3.4M  102M   4% /boot/efi
/dev/sda1       246G   25G  209G  11% /mnt/volume-fra1-01-part1
overlay          58G   58G     0 100% /var/lib/docker/overlay2/1b169d3fb316ed26072c13336a9b260add686b548d856f4765560637907b4efd/merged
shm              64M  4.0K   64M   1% /var/lib/docker/containers/37cfaddc5a5466550d7e0471b16b70a5a7b0659dfb7f3e8753dabb0b6d885127/shm
tmpfs           396M     0  396M   0% /run/user/0

The UI is stuck at “CREATE INDEX…”.

My guess:
I remember that Discourse first copies the backup file to the guest’s /tmp and then unzips it, which means an extra 20GB + 20GB = 40GB on top of the original file.

Shouldn’t the backup file be unzipped directly inside the backups folder (without copying it to /tmp), since the backups folder can be mounted on a bigger drive? Also, what is the reasoning behind copying the backup file before unzipping it?
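
For anyone hitting the same wall, here is a rough way to watch where the space goes while a restore runs (a sketch only; the paths assume a standard /var/discourse install and will differ on other setups):

sudo watch -n 10 '
  df -h / /mnt/volume-fra1-01-part1
  du -sh /var/discourse/shared/standalone/backups/default 2>/dev/null
  du -sh /var/lib/docker/overlay2 2>/dev/null
'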

Maybe you should try a database-only backup, then copy all the images over in a separate manual step, since most of the backup’s size is the images.


I would first follow @codinghorror’s recommendation here: do a database-only restore and copy the files over by hand.
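
Roughly, the “copy files by hand” part looks like this (a sketch only, assuming a standard /var/discourse install on both machines; old-server is a placeholder for your previous host):

rsync -avz --progress \
  old-server:/var/discourse/shared/standalone/uploads/ \
  /var/discourse/shared/standalone/uploads/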

Additionally, when I restored meta it sat on CREATE INDEX for about 30 minutes; you are just going to have to wait it out. To speed it up somewhat you could increase db_shared_buffers, per:

https://github.com/discourse/discourse_docker/blob/master/samples/standalone.yml#L29-L31
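
Something along these lines in /var/discourse/containers/app.yml, followed by a rebuild (the 1GB value is only an example; a common rule of thumb is up to about a quarter of the droplet’s RAM):

# in containers/app.yml, under params:, uncomment and raise:
#   db_shared_buffers: "1GB"
cd /var/discourse
./launcher rebuild app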


Did so and it worked. Thanks Jeff and Sam for your input.

Yep, I realise that, but that wasn’t the case here. The disk hit 100% full, I think because of how the backup file is handled (it is copied, then unzipped, effectively tripling the original backup’s size on disk).
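
In case it helps someone else, this is roughly how the leftover files from a failed restore can be found and removed (a sketch only; the tmp/restores path is my best guess for a standard install, so check before deleting anything):

cd /var/discourse
./launcher enter app                       # shell inside the app container
du -sh /var/www/discourse/tmp/restores/*   # see what the failed restore left behind
rm -rf /var/www/discourse/tmp/restores/*   # reclaim the space
exit
./launcher cleanup                         # also prunes old stopped containers/images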


An alternative solution would have been to increase your droplet size temporarily and scale it back down once the job is done.
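
If you go that route, it can also be done from the command line with doctl, along these lines (the droplet ID and size slug are placeholders; double-check the flags against your doctl version, and note that a resize which grows the disk is, as far as I know, permanent on DigitalOcean, while a CPU/RAM-only resize is reversible but adds no disk space):

doctl compute droplet-action resize 123456789 \
  --size s-4vcpu-8gb --resize-disk --wait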