Online Safety Act (New Ofcom Rules)

After chatting with @RCheesley a bit more about this, I’ve realised that another helpful thing I could do is share a list of mitigation tools/strategies, without associating them with specific risks.

I am not suggesting that all communities should use all these things, or that there aren’t other things they should use. This is not intended as a definitive list. If anyone feels they have an unacceptable level of unmitigated risk, feel free to post here and we can discuss options.

So here is the list, in no particular order:

  • Terms of service which are:
    • comprehensive
    • relevant
    • up to date
    • easy to find and access
  • Terms of service which may:
    • specifically exclude the furnishing or soliciting of data from children
    • publicise the visibility to admins of all content (including PMs and chat)
  • Internal whistle blowing policies
  • Robust moderation practices undertaken by trained personnel, which may include:
    • the swift take down of illegal content
    • forcing manual registration approval
    • mandating rich profiles with enforced fields, including real names
    • removing accounts of proscribed organisations
    • proactively moderating user profiles
    • disabling user to user messaging and chat
  • Making the flagging option for illegal content available to anonymous users
  • Trust levels
  • Watched words
  • AI triaging
  • Secure uploads
  • Slow mode
  • Topic timers (useful in some instances)
5 Likes

This seems like an area where Discourse could offer more feature support.

By default, Discourse permits private messaging at Trust Level 1, which users reach automatically, without moderator intervention, by entering 5 topics, reading 30 posts, and spending at least 10 minutes reading.
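If I'm reading the defaults right, those thresholds correspond to the following site settings (names as of recent Discourse versions; worth double-checking on your own instance):

```
tl1 requires topics entered:   5
tl1 requires read posts:      30
tl1 requires time spent mins: 10
```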

Most forum moderators only read private messages if someone in the conversation flags a post. So if neither party objects to what's happening over PM (e.g. consensually exchanging porn, or a minor being successfully groomed), no flag is ever raised and the thread is never reviewed.

It would be a good first step to be able to limit access to private messages to users who affirm that they’re over the age of 18. (This could be a simple checkbox, or it could include an age-gating mechanism compliant with the Online Safety Act.)

Right now our only available tool seems to be restricting PMs to members of a specific group, which means users would have to navigate to the group’s page and click “Request” to ask for access (assuming they can even find that button, a tall order), and then a moderator would have to manually add the user to the group.

Is there anything better we can do?

1 Like

You can create a custom user field and set up an automation that automatically adds users who tick that checkbox to a group.

And then you allow that group to initiate personal messages.

But this won’t prevent users who were messaged from receiving and replying to personal messages.
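For completeness, the same membership change the automation makes can also be performed through the Discourse REST API (`PUT /groups/{id}/members.json`, authenticated with an admin API key). A minimal Python sketch, with the host, group id, and credentials as placeholders:

```python
# Sketch: adding a user to a PM-allowed group via the Discourse REST API.
# The endpoint and parameters follow the documented Discourse API
# (PUT /groups/{id}/members.json); host, group id, and key are placeholders.

def build_add_member_request(host, group_id, username, api_key,
                             api_username="system"):
    """Return (method, url, headers, payload) for the 'add members to
    group' call; actually sending the request is left to the caller."""
    return (
        "PUT",
        f"{host}/groups/{group_id}/members.json",
        {"Api-Key": api_key, "Api-Username": api_username},
        {"usernames": username},  # comma-separated list is also accepted
    )

method, url, headers, payload = build_add_member_request(
    "https://forum.example.com", 42, "new_member", "YOUR_API_KEY")
print(method, url)
```

You'd still want the automation for the normal flow; the API route is mainly useful for back-filling existing users in bulk.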

1 Like

I don’t think I knew that. In general, can TL0 users reply to private messages they receive?

You can use watched words to mitigate this risk.
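As an illustration (the action name comes from the admin UI under Admin → Customize → Watched Words; the example phrases are hypothetical), a rule could hold replies containing risky phrases for review rather than delivering them immediately:

```
Action: Require Approval
Words:  "add me on telegram", "send me your number"
```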

1 Like