I think you could just `rm -r tombstone` and the directories will get created as needed when more tombstoned files get moved over. A slightly more paranoid solution would be something like
```
find tombstone -type f -exec rm {} \;
```
Still more paranoid would be to first rsync the tombstone stuff elsewhere for a while (just in case something got tombstoned by mistake).
The most robust (but scary nonetheless) solution would be to move images to S3-style object storage (AWS S3, or DigitalOcean Spaces). It looks like it’d be $5/month for 250GB on DigitalOcean. It’s probably a good long-term solution and will reduce load on your server.
Thanks Jay! Interesting, I always believed the tombstone would be cleaned up automatically.
And yep, I’m one of the paranoid types and have everything, including my tombstone, backed up - twice
I have considered S3 or DO storage (S3 is quite expensive, by the way, because they charge for traffic - a terabyte of traffic per month adds up), but as you say, I’ve always been scared of that route. I don’t fully understand those platforms and the migration would probably keep me awake at night…
I’m hosting at Hetzner and they just plugged in an additional SSD drive for me this morning, so I have plenty of space again (kudos to the Hetzner team by the way - everything went very smoothly).
The tombstone files are cleaned up automatically, but your queue is full of the rebake processes, so it’s not getting to the cleanup tasks. (I’m not sure, but the cleanup tasks may be in the low-priority queue, so they may never get called until all of the rebakes are finished.)
I think you forgot to trigger the Jobs::PurgeDeletedUploads job. The CleanUpUploads job moves the unwanted uploads to the tombstone directory; after the grace period, the PurgeDeletedUploads job deletes the tombstone files.
Just need to add that you aren’t supposed to serve anything from S3 directly. You need to put a CDN in front of the files there. On DigitalOcean you get a CDN included with Spaces too.
Job exception:

```
PG::ForeignKeyViolation: ERROR: update or delete on table "uploads" violates foreign key constraint "fk_rails_1d362f2e97" on table "user_profiles"
DETAIL: Key (id)=(169100) is still referenced from table "user_profiles".
```
```ruby
def purge_tombstone(grace_period)
  if Dir.exist?(Discourse.store.tombstone_dir)
    # Delete tombstoned files whose mtime is older than the grace period (in days)
    Discourse::Utils.execute_command(
      'find', Discourse.store.tombstone_dir, '-mtime', "+#{grace_period}", '-type', 'f', '-delete'
    )
  end
end
```
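The `find -mtime` behaviour that method relies on can be tried standalone. A sketch (the `tombstone-demo` directory and the 30-day value are just for illustration) that backdates one file and shows that only files older than the grace period get deleted:

```shell
GRACE_PERIOD=30  # days, mirroring the grace_period argument above

mkdir -p tombstone-demo
echo old > tombstone-demo/old.png
echo new > tombstone-demo/new.png

# Backdate old.png so its mtime is ~40 days ago, i.e. past the grace period
# (GNU touch syntax; BSD/macOS touch needs -t instead of -d)
touch -d "40 days ago" tombstone-demo/old.png

# Same command the Ruby method shells out to
find tombstone-demo -mtime "+$GRACE_PERIOD" -type f -delete
```

`-mtime +30` matches files last modified more than 30 full days ago, so freshly tombstoned files survive until the grace period has fully elapsed.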