Curious about Discourse’s take on Digital Online Safety.
As platform owners who allow User Generated Content, I do feel there is an obligation to keep the web safe, especially when it comes to removing UGC as required by law.
For instance, one example of Digital Online Safety with UGC is Australia’s abhorrent violent material law, under which content that is abhorrent, inappropriate, etc. is required by law to be taken off a platform.
So I was curious: is it up to each individual instance of Discourse to implement Digital Online Safety efforts? It just seems to me like something that would make the web a better and safer place. Just looking to have an open discussion about it.
User Generated Content: images, photos, user-provided biographical information, in-game creations, live-streamed broadcasts, audio, video, GIFs, applications, comments, text, and location-based information
Since Discourse is used around the world and laws are different everywhere, I think that all Discourse(-the-software) can do is make sure that all the underlying technical mechanisms are in place, like:
- making an admin contact email address available
- giving tools to moderators and administrators to moderate, hide, and remove content
- giving tools to users to flag inappropriate or unlawful content
- giving tools to moderators and administrators to ban users and block address ranges
(And Discourse does that very well)
Richard’s response here pretty much covers it all; I can only add a bit that more or less repeats his reply in different words.
Discourse can be used in ways that are compliant with the applicable laws, but the software itself isn’t compliant or non-compliant. If you host Discourse for users covered by Australia’s law, or by the GDPR for instance, you’ll need to do so in a compliant way.
And as Richard pointed out, we’ve spent quite a bit of time making sure we provide you with all the necessary tools to be able to do it easily (user cancellation, user anonymization, etc).
Thanks @RGJ and @osioke for your replies! Looks like it really comes down to how everyone uses Discourse. Maybe you don’t have Australian users and don’t need to worry, or maybe you don’t host in the EU and don’t have to be GDPR compliant.
I 100% agree that Discourse is truly great at flagging and at providing the tools moderators need. You two have answered my original question, thank you!
But I do have a follow up regarding:
As an admin or mod I can delete a post or reply, but it still shows in the thread as “1 hidden reply” to admins and mods. I ask because, in the case of unlawful content, removing the content from the server entirely is also expected after some analysis is done (like how many people may have seen it, how long it was up, etc.). Is there a way in the UI to remove the hidden content entirely from the server? Or is a programmatic solution using Discourse’s API the way to approach this?
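For the programmatic route, a rough sketch of what such an API call might look like: Discourse exposes a `DELETE /posts/{id}.json` endpoint, and (assuming the `can_permanently_delete` site setting is enabled on the instance) a `force_destroy` parameter can ask it to remove the post from the database rather than soft-delete it. The base URL, API key, and username below are placeholders; this only builds the request so you can inspect it before sending.

```python
# Hypothetical sketch of permanently deleting a post via the Discourse REST API.
# Assumes the instance has the `can_permanently_delete` site setting enabled
# and that you hold an admin API key. Nothing here is sent over the network;
# the function just constructs the request for inspection.
import urllib.request


def build_delete_request(base_url, post_id, api_key, api_username, permanent=True):
    """Construct (but do not send) a DELETE request for a post.

    With permanent=True, the `force_destroy` flag asks Discourse to remove
    the post entirely instead of leaving a soft-deleted "hidden reply".
    """
    url = f"{base_url}/posts/{post_id}.json"
    if permanent:
        url += "?force_destroy=true"
    return urllib.request.Request(
        url,
        method="DELETE",
        headers={
            "Api-Key": api_key,          # admin API key (placeholder)
            "Api-Username": api_username,  # acting user, e.g. "system"
        },
    )


# Example: inspect the request that would be sent for post 42.
req = build_delete_request("https://forum.example.com", 42, "KEY", "system")
print(req.get_method(), req.full_url)
```

Sending it would then be a matter of passing `req` to `urllib.request.urlopen` (or using `curl`/`requests`); note that a plain delete without `force_destroy` only soft-deletes, which is exactly the “1 hidden reply” behaviour described above.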