I’m aware the Online Safety Act became law in the UK in November 2023. I’m wondering if Discourse is launching any new features to support compliance with this new law and help make online communities safer for people under the age of 18 to use?
I just came across an upcoming discussion session that may be of interest to Discourse folks:
“Promising Trouble is hosting a lunchtime session with Ali Hall and Andrew Park from Ofcom to discuss the new Online Safety responsibilities for small, low-risk community sites.”
Wednesday, February 12, 2025, 6:30–8:00 AM CST
I am working through compliance with the OSA at the moment.
While the majority of the work seems to be in documenting that the organisation has considered and attempted to mitigate the risk of Illegal Harms, I wondered if anyone out there has developed any shareable documentation or automated tooling to help provide an evidence base for a lack of nefarious activity within their forum?
I was thinking possibly:
Data Explorer queries searching for specific keywords or patterns of usage? (See the sketch after this list.)
Lists of Watched Words that could improve flagging of posts and private messages?
Discourse AI configurations or prompts which might help to flag concerning activity?
Templates for an improved ‘UK OSA Approved’ Terms of Service?
Other ‘sysadmin tips and tricks’ for maximising the effectiveness of the protections, with the minimum of additional work?
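On the Data Explorer idea: below is a minimal sketch of what such a query might look like, assuming the standard Discourse posts table. The keyword pattern and the 30-day window are illustrative placeholders, not vetted OSA terms.

```sql
-- Sketch: recent, non-deleted posts matching an illustrative keyword pattern.
-- The default pattern and the time window are assumptions to adapt.
-- [params]
-- string :pattern = keyword1|keyword2
SELECT p.id AS post_id,
       p.user_id,
       p.topic_id,
       p.created_at
FROM posts p
WHERE p.deleted_at IS NULL
  AND p.created_at > CURRENT_TIMESTAMP - INTERVAL '30 days'
  AND p.raw ~* :pattern
ORDER BY p.created_at DESC
LIMIT 100
```

Saved as a Data Explorer query, something like this could be re-run and exported periodically as part of an annual evidence review.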
I am just as concerned as anyone else would be about the privacy implications of all this, but if audited, we would need to be able to show we invested an appropriate amount of effort checking that nothing illegal was going on, and can present (and review) this evidence annually.
Watched words sounds like it might be a good idea, although of course it’s a blunt instrument.
I may choose to document a process of periodically checking the use of user-to-user messages. That works for me because my users barely use the facility.
/admin/reports/user_to_user_private_messages
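For spotting an uptick without reading anyone’s messages, a Data Explorer sketch along these lines might help. It assumes the standard Discourse schema, where personal messages are topics with the 'private_message' archetype.

```sql
-- Sketch: monthly count of posts in personal-message topics,
-- to make an uptick in user-to-user messaging visible over time.
SELECT date_trunc('month', p.created_at)::date AS month,
       COUNT(*) AS pm_posts
FROM posts p
JOIN topics t ON t.id = p.topic_id
WHERE t.archetype = 'private_message'
  AND p.deleted_at IS NULL
GROUP BY 1
ORDER BY 1
```

Because it only counts messages rather than inspecting their content, it sidesteps most of the privacy concern while still leaving a reviewable trail.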
If there was an uptick in these messages, it might indicate elevated risk, and I’d need a response to that. I’m not sure what there is between the extremes of:
users will let admins know if messages are problematic
admins will read some (or all) of the messages to check they are not problematic
Personally I don’t want automated analysis which would have false negatives as well as false positives.
I would (will) concentrate on simple statements which position my forum as very low risk on all axes. Proportionate checking is therefore to do nothing, or, if anything, something very minimal and easy. Relying on flags is simple and easy.