Tombstone directory not getting cleaned up for images

I’m not sure I understand how the tombstone cleanup process works. It’s somewhat urgent, as my disk is filling up rapidly while we reprocess all our images. We have a lot of them, so this generates many tombstone files.

Here’s my setup:

  • clean_up_uploads is enabled
  • clean_orphan_uploads_grace_period_hours is set to 48 hours
  • purge_deleted_uploads_grace_period_days is temporarily set to 2 days (I started reprocessing 4 days ago)

I triggered the Jobs::CleanUpUploads job in the Sidekiq scheduler, but nothing happened. What am I missing here?

I think that you could just rm -r tombstone and the directories will get created as needed when more tombstoned files get moved over. A slightly more paranoid solution would be something like

find tombstone -type f -exec rm \{\} \;

Still more paranoid would be to first rsync the tombstone stuff elsewhere for a while (just in case something got tombstoned by mistake).

The most robust (but nonetheless scary) solution would be to move images to S3-compatible object storage (AWS, or Digital Ocean Spaces). It looks like it’d be $5/month for 250 GB on Digital Ocean. It’s probably a good long-term solution and will reduce the load on your server.

Thanks Jay! Interesting, I always believed the tombstone would be cleaned up automatically.

And yep, I’m one of the paranoid types and have everything, including my tombstone, backed up - twice :slight_smile:

I have considered S3 or DO storage (S3 is quite expensive, by the way, because they charge for traffic - a terabyte of traffic per month adds up), but as you say, I’ve always been scared of that route. I don’t fully understand those platforms, and the migration would probably keep me awake at night…

I’m hosting at Hetzner and they just plugged in an additional SSD drive for me this morning, so I have plenty of space again (kudos to the Hetzner team, by the way - everything went very smoothly).

The tombstone files are cleaned up automatically, but your queue is full of rebake jobs, so it’s not getting to the cleanup tasks. (I’m not sure, but the cleanup tasks may be in the low-priority queue, so they may never get called until all of the rebakes are finished.)

Getting more disk is the easier solution!

Oh? My queue is pretty empty, so I don’t think that’s it.

N. B. I have no idea.

Do you see the cleanup task in the queue?

I think I looked for a cleanup rake task and didn’t see one.

@vinothkannans I am somewhat concerned here - can you spot-check that tombstone cleanup is working … as designed in our non-S3 setups?

This is working fine on my local dev environment.

I think you forgot to trigger the Jobs::PurgeDeletedUploads job. The CleanUpUploads job moves unwanted uploads to the tombstone directory; after the grace period, the PurgeDeletedUploads job deletes the tombstone files.
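
If you don’t want to wait for the scheduler, a minimal sketch for running it by hand from the rails console (untested; assumes the standard way Discourse scheduled jobs are invoked):

# Run the scheduled purge inline from the rails console:
Jobs::PurgeDeletedUploads.new.execute(nil)

# Or enqueue it through Sidekiq instead of running it inline:
Jobs.enqueue(:purge_deleted_uploads)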

Brilliant, thanks Vinoth!

Just to add that you aren’t supposed to serve anything from S3 directly - you need to put a CDN in front of the files there. On Digital Ocean you get a CDN included with Spaces, too.
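
I believe the relevant site setting is s3_cdn_url; a minimal sketch from the rails console, with a placeholder hostname:

# Point uploads at the CDN (placeholder hostname - use your pull zone’s URL):
SiteSetting.s3_cdn_url = "https://cdn.example.com"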

Ah! Thanks for that. (And so obvious now that I hear it!) KeyCDN is pretty affordable.

You can even use Cloudflare: since it’s a separate domain used only for uploads, you won’t run into the JavaScript problems, and you get a good CDN for free.

In /logs I’ve got the following error, which causes Jobs::CleanUpUploads to fail.

Job exception: PG::ForeignKeyViolation: ERROR:  update or delete on table "uploads" violates foreign key constraint "fk_rails_1d362f2e97" on table "user_profiles"
DETAIL:  Key (id)=(169100) is still referenced from table "user_profiles".

So it looks like there is probably some upload that was deleted from the disk but still exists in the database. I’m 100% sure that I haven’t made any manual changes in the database, so this must be some byproduct of version migrations…

May I ask whether I should fix this manually from the console, or can you recommend a script in the package?
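
For what it’s worth, I think something like this from the rails console could locate the offending row (untested sketch; it asks PostgreSQL which user_profiles column the constraint covers instead of assuming a column name):

# Resolve the column behind fk_rails_1d362f2e97, then find the
# user_profiles row still pointing at upload 169100.
result = ActiveRecord::Base.connection.execute(<<~SQL)
  SELECT a.attname
  FROM pg_constraint c
  JOIN pg_attribute a ON a.attrelid = c.conrelid AND a.attnum = ANY (c.conkey)
  WHERE c.conname = 'fk_rails_1d362f2e97'
SQL
column = result.first["attname"]

# Inspect (don't delete!) the referencing profile, so the link can be
# cleared properly rather than via raw SQL.
UserProfile.where("#{column} = ?", 169100).first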

Backtrace
rack-mini-profiler-1.1.6/lib/patches/db/pg.rb:69:in `exec_params'
rack-mini-profiler-1.1.6/lib/patches/db/pg.rb:69:in `exec_params'
activerecord-6.0.1/lib/active_record/connection_adapters/postgresql_adapter.rb:672:in `block (2 levels) in exec_no_cache'
activesupport-6.0.1/lib/active_support/dependencies/interlock.rb:48:in `block in permit_concurrent_loads'
activesupport-6.0.1/lib/active_support/concurrency/share_lock.rb:187:in `yield_shares'
activesupport-6.0.1/lib/active_support/dependencies/interlock.rb:47:in `permit_concurrent_loads'
activerecord-6.0.1/lib/active_record/connection_adapters/postgresql_adapter.rb:671:in `block in exec_no_cache'
activerecord-6.0.1/lib/active_record/connection_adapters/abstract_adapter.rb:718:in `block (2 levels) in log'
/usr/local/lib/ruby/2.6.0/monitor.rb:235:in `mon_synchronize'
activerecord-6.0.1/lib/active_record/connection_adapters/abstract_adapter.rb:717:in `block in log'
activesupport-6.0.1/lib/active_support/notifications/instrumenter.rb:24:in `instrument'
activerecord-6.0.1/lib/active_record/connection_adapters/abstract_adapter.rb:708:in `log'
activerecord-6.0.1/lib/active_record/connection_adapters/postgresql_adapter.rb:670:in `exec_no_cache'
activerecord-6.0.1/lib/active_record/connection_adapters/postgresql_adapter.rb:651:in `execute_and_clear'
activerecord-6.0.1/lib/active_record/connection_adapters/postgresql/database_statements.rb:111:in `exec_delete'
activerecord-6.0.1/lib/active_record/connection_adapters/abstract/database_statements.rb:180:in `delete'
activerecord-6.0.1/lib/active_record/connection_adapters/abstract/query_cache.rb:22:in `delete'
activerecord-6.0.1/lib/active_record/persistence.rb:395:in `_delete_record'
activerecord-6.0.1/lib/active_record/persistence.rb:883:in `_delete_row'
activerecord-6.0.1/lib/active_record/persistence.rb:879:in `destroy_row'
activerecord-6.0.1/lib/active_record/counter_cache.rb:173:in `destroy_row'
activerecord-6.0.1/lib/active_record/locking/optimistic.rb:108:in `destroy_row'
activerecord-6.0.1/lib/active_record/persistence.rb:535:in `destroy'
activerecord-6.0.1/lib/active_record/callbacks.rb:309:in `block in destroy'
activesupport-6.0.1/lib/active_support/callbacks.rb:135:in `run_callbacks'
activesupport-6.0.1/lib/active_support/callbacks.rb:827:in `_run_destroy_callbacks'
activerecord-6.0.1/lib/active_record/callbacks.rb:309:in `destroy'
activerecord-6.0.1/lib/active_record/transactions.rb:311:in `block in destroy'
activerecord-6.0.1/lib/active_record/transactions.rb:375:in `block in with_transaction_returning_status'
activerecord-6.0.1/lib/active_record/connection_adapters/abstract/database_statements.rb:279:in `transaction'
activerecord-6.0.1/lib/active_record/transactions.rb:212:in `transaction'
activerecord-6.0.1/lib/active_record/transactions.rb:366:in `with_transaction_returning_status'
activerecord-6.0.1/lib/active_record/transactions.rb:311:in `destroy'
/var/www/discourse/app/models/upload.rb:112:in `block in destroy'
activerecord-6.0.1/lib/active_record/connection_adapters/abstract/database_statements.rb:281:in `block in transaction'
activerecord-6.0.1/lib/active_record/connection_adapters/abstract/transaction.rb:280:in `block in within_new_transaction'
/usr/local/lib/ruby/2.6.0/monitor.rb:235:in `mon_synchronize'
activerecord-6.0.1/lib/active_record/connection_adapters/abstract/transaction.rb:278:in `within_new_transaction'
activerecord-6.0.1/lib/active_record/connection_adapters/abstract/database_statements.rb:281:in `transaction'
activerecord-6.0.1/lib/active_record/transactions.rb:212:in `transaction'
/var/www/discourse/app/models/upload.rb:110:in `destroy'
activerecord-6.0.1/lib/active_record/persistence.rb:551:in `destroy!'
activerecord-6.0.1/lib/active_record/relation/batches.rb:70:in `block (2 levels) in find_each'
activerecord-6.0.1/lib/active_record/relation/batches.rb:70:in `each'
activerecord-6.0.1/lib/active_record/relation/batches.rb:70:in `block in find_each'
activerecord-6.0.1/lib/active_record/relation/batches.rb:136:in `block in find_in_batches'
activerecord-6.0.1/lib/active_record/relation/batches.rb:238:in `block in in_batches'
activerecord-6.0.1/lib/active_record/relation/batches.rb:222:in `loop'
activerecord-6.0.1/lib/active_record/relation/batches.rb:222:in `in_batches'
activerecord-6.0.1/lib/active_record/relation/batches.rb:135:in `find_in_batches'
activerecord-6.0.1/lib/active_record/relation/batches.rb:69:in `find_each'
/var/www/discourse/app/jobs/scheduled/clean_up_uploads.rb:16:in `execute'
/var/www/discourse/app/jobs/base.rb:232:in `block (2 levels) in perform'
rails_multisite-2.0.7/lib/rails_multisite/connection_management.rb:63:in `with_connection'
/var/www/discourse/app/jobs/base.rb:221:in `block in perform'
/var/www/discourse/app/jobs/base.rb:217:in `each'
/var/www/discourse/app/jobs/base.rb:217:in `perform'
/var/www/discourse/app/jobs/base.rb:279:in `perform'
mini_scheduler-0.12.2/lib/mini_scheduler/manager.rb:86:in `process_queue'
mini_scheduler-0.12.2/lib/mini_scheduler/manager.rb:36:in `block (2 levels) in initialize'

Thank you very much for any advice!

I can confirm the tombstone is not getting purged by PurgeDeletedUploads on our S3 setup either.

https://meta.discourse.org/t/how-to-reverse-engineer-the-discourse-api/20576/24?u=terrapop

It seems this only cleans up local storage; I can’t find any references to S3:

def purge_tombstone(grace_period)
  if Dir.exists?(tombstone_dir)
    Discourse::Utils.execute_command(
      'find', tombstone_dir, '-mtime', "+#{grace_period}", '-type', 'f', '-delete'
    )
  end
end

Does this mean the tombstone will grow indefinitely on S3 (or any other external upload store)?

No, I checked S3 recently and the expected number of backups was there (5).

I’m not talking about backups; this is about images, which apparently are NOT being deleted from the tombstone on S3. Also, this whole topic is about images, not backups.

It would be nice if someone from @team could confirm that images in the tombstone are not deleted the way they should be, so we could find a way around this and delete them ourselves periodically if that is indeed the case.
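
If it turns out nothing purges the S3 tombstone, one workaround we could apply ourselves is an S3 lifecycle rule on the tombstone/ prefix. An untested sketch using the aws-sdk-s3 gem (bucket name, region, and the 30-day expiry are placeholders):

require "aws-sdk-s3"

# NOTE: this call replaces the bucket's entire lifecycle configuration,
# so any existing rules must be included alongside this one.
client = Aws::S3::Client.new(region: "eu-central-1") # placeholder region
client.put_bucket_lifecycle_configuration(
  bucket: "my-discourse-uploads", # placeholder bucket
  lifecycle_configuration: {
    rules: [{
      id: "purge-tombstone",
      status: "Enabled",
      filter: { prefix: "tombstone/" },
      expiration: { days: 30 } # placeholder grace period
    }]
  }
)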