The SESTA/FOSTA laws effectively remove much of the Section 230 safe-harbor protection for social/forum/UGC webmasters, making them liable for what their users do.
It might therefore be wise to support using an image recognition API as one way to improve protection, automating the blocking of uploads of explicit content (nudity, gore, etc.).
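As a rough sketch of how an upload hook could consult such a service: most image-moderation APIs return per-label confidence scores, and the site would block the upload when an unsafe label crosses a threshold. The label names, score format, and threshold below are assumptions for illustration, not any particular vendor's schema.

```python
# Hypothetical shape of a moderation check. UNSAFE_LABELS and the
# 0.8 threshold are illustrative assumptions; a real integration
# would use the labels and score range of the chosen API.
UNSAFE_LABELS = {"nudity", "gore", "violence"}

def should_block(scores: dict, threshold: float = 0.8) -> bool:
    """Return True if any unsafe label's confidence meets the threshold."""
    return any(scores.get(label, 0.0) >= threshold for label in UNSAFE_LABELS)

# Example responses as they might come back from a moderation endpoint:
print(should_block({"nudity": 0.95, "cat": 0.99}))  # blocked
print(should_block({"landscape": 0.97}))            # allowed
```

Making the threshold a site setting would let each webmaster trade off false positives against risk.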
This would also improve protection against exploits such as uploading inappropriate images in drafts and then hotlinking to those images elsewhere, effectively using the forum as free anonymous image hosting. I'm not sure how exploitable this is with Discourse, but with default settings it looks like an image could remain accessible for 180 days after draft creation without the webmaster knowing what has been uploaded (the "delete drafts older than n days" setting).