Disk usage spike during backup, Discourse crashed hard :-(

Hmm. I have been migrating files off of S3 and back to my local server, but the process seems to run on the fly, moving only a few hundred images (each around ~300 KB) at a time, so roughly ~0.1 GB per batch. Over the last week I might have run the script 20 times, so about 20 batches ≈ 2 GB of disk space in total, which I had plenty of room for.
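For reference, here's the back-of-envelope math I'm working from (the per-batch image count and size are my rough estimates, not exact numbers):

```python
# Rough estimate of disk space consumed by the migration batches.
# These figures are my estimates: ~300 images per batch, ~300 KB per image.
images_per_batch = 300
avg_image_bytes = 300 * 1024        # ~300 KB each
batches_run = 20

batch_gb = images_per_batch * avg_image_bytes / 1024**3
total_gb = batch_gb * batches_run

print(f"~{batch_gb:.2f} GB per batch")   # ~0.09 GB
print(f"~{total_gb:.1f} GB total")       # ~1.7 GB over the week
```

So even if every batch stayed on disk, the migration itself should only account for a couple of GB.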

Is there any chance that even though the script appears to move them on the fly (downloading them from S3 and apparently uploading them immediately to DigitalOcean), there could also be some kind of lag from a queued job related to moving those images that would have kicked in at 5:30am?

(Also: I was running these batches manually until 9pm, so as far as I know, the server was just doing normal operations from 9pm until 5:30am when it went down.)

Here’s my 7-day disk usage. It was climbing steadily as the images were imported, but you can see where it slammed up to 100% at 5:30am:

Are there any log files that might have some clues about what happened at 5:35am, besides the ones I see in the ‘Logs’ tab?
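(In case it helps to know where I'm already looking: this is the quick sketch I was going to use to pull lines from around that window. It assumes a standard Docker install where the logs live under /var/discourse/shared/standalone/log/, which may not match every setup.)

```python
# Scan the Discourse log directory for lines timestamped around the crash window.
# Path assumes a standard Docker install; adjust LOG_ROOT if your setup differs.
from pathlib import Path

LOG_ROOT = Path("/var/discourse/shared/standalone/log")
MARKERS = ("05:2", "05:3", "05:4")   # 24-hour timestamps around 5:30am

for log_file in LOG_ROOT.rglob("*.log"):
    try:
        lines = log_file.read_text(errors="replace").splitlines()
    except OSError:
        continue  # skip files we can't read (permissions, rotation in progress)
    hits = [line for line in lines if any(m in line for m in MARKERS)]
    if hits:
        print(f"=== {log_file} ({len(hits)} matching lines) ===")
        for line in hits[-20:]:      # show the last 20 matches per file
            print(line)
```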
