Online Safety Act (New Ofcom Rules)

After chatting with @RCheesley a bit more about this, I’ve realised that another helpful thing I could do is share a list of mitigation tools/strategies, without associating them with specific risks.

I am not suggesting that all communities should use all these things, or that there aren’t other things they should use. This is not intended as a definitive list. If anyone feels they have an unacceptable level of unmitigated risk, feel free to post here and we can discuss options.

So here is the list, in no particular order:

  • Terms of service which are:
    • comprehensive
    • relevant
    • up to date
    • easy to find and access
  • Terms of service which may:
    • specifically exclude the furnishing or soliciting of data from children
    • make clear to users that all content (including PMs and chat) is visible to admins
  • Internal whistle-blowing policies
  • Robust moderation practices, undertaken by trained personnel, which may include:
    • the swift take down of illegal content
    • forcing manual registration approval
    • mandating rich profiles with enforced fields, including real names
    • removing accounts of proscribed organisations
    • proactively moderating user profiles
    • disabling user to user messaging and chat
  • Making the flagging option for illegal content available to anonymous users
  • Trust levels
  • Watched words
  • AI triaging
  • Secure uploads
  • Slow mode (see the API sketch after this list)
  • Topic timers, which may be useful in some instances
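
For the items above that can be automated, such as watched words and slow mode, here is a minimal sketch of scripting them against a self-hosted Discourse instance. It assumes the standard Discourse REST API authentication headers (Api-Key / Api-Username) and that your Discourse version exposes the watched-words and slow-mode endpoints with the paths and parameters shown; the base URL, API key, topic ID, and parameter names are placeholders, so please verify everything against your own instance before relying on it.

```python
# Hedged sketch: automating two mitigation items (watched words and slow mode)
# via the Discourse REST API. Endpoint paths and parameter names are assumptions
# based on public Discourse API conventions; check them against your instance.

import requests

BASE_URL = "https://forum.example.org"   # placeholder: your Discourse instance
HEADERS = {
    "Api-Key": "YOUR_ADMIN_API_KEY",     # placeholder: admin API key
    "Api-Username": "system",
}


def add_watched_word(word: str, action: str = "block") -> None:
    """Add a watched word so matching posts are blocked, flagged, censored, etc."""
    resp = requests.post(
        f"{BASE_URL}/admin/customize/watched_words.json",
        headers=HEADERS,
        json={"words": [word], "action_key": action},
    )
    resp.raise_for_status()


def enable_slow_mode(topic_id: int, seconds: int = 600) -> None:
    """Require users to wait `seconds` between posts in the given topic."""
    resp = requests.put(
        f"{BASE_URL}/t/{topic_id}/slow_mode.json",
        headers=HEADERS,
        json={"seconds": seconds, "enabled_until": None},
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # Example usage with placeholder values.
    add_watched_word("example-banned-term", action="require_approval")
    enable_slow_mode(topic_id=1234, seconds=900)
```

The same pattern (authenticated requests to the admin endpoints) should extend to other items on the list, such as enabling manual registration approval or adjusting trust level settings, but those are site settings you may prefer to manage through the admin UI.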