New Ofcom Rules

Hello

The UK government is planning to change the law on online safety, and I was curious what measures Discourse is taking to address this, or whether there is a suggested response for anyone who is hosting in the UK?

https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/time-for-tech-firms-to-act-uk-online-safety-regulation-comes-into-force/

Our community is looking into this, but it’s unclear to me whether the onus is on us or on Discourse; if it’s on us, then in theory every WhatsApp group chat or Discord server would also need to do this.

Thanks in advance for any help (I did check but couldn’t find a recent thread about this)

From Ofcom’s website: “firms are now legally required to start taking action to tackle criminal activity on their platforms”

I think that Discourse already provides a way to tackle criminal activity via the ‘It’s Illegal’ flag, which alerts staff to illegal activity on the site. Realistically, other than putting in measures like the illegal flag option, is there much else that can be done?

Yup, we’re on top of it. We have been monitoring it for most of the year and are prepared.

It’s on both of us. We provide the tools to comply with the OSA (because we have to comply here on Meta), but you are responsible for how you use them.

The key considerations are:

We have this with the illegal content flag that @ondrej posted above, which should trigger you to use the existing tools to remove the content in a compliant way and do the appropriate reporting.

As above, the illegal flag and other custom flag types are available for you to use. We are making a change so that users who are not logged in can also flag illegal content. Do we have an ETA for that @tobiaseigen?

You will need to define your internal processes yourself, but the data is all logged, so you will be able to report on it when required (a hedged example follows below).

See above.

This is on you to create.

Also on you to organise.

Also on you.

And… also on you.
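
To make the record-keeping point above concrete, here is a minimal sketch (not an official Discourse feature) of pulling the review queue, where illegal-content flags land, over the admin API. The `/review.json` endpoint and the field names below are assumptions based on how the review queue UI loads its data, so verify the response shape against your own instance.

```python
# Hedged sketch: list reviewables (flag reports) via the admin API so you
# can keep your own compliance record. Endpoint and field names are
# assumptions to verify against your own Discourse instance.
import requests

BASE_URL = "https://forum.example.com"  # hypothetical forum URL
HEADERS = {
    "Api-Key": "YOUR_ADMIN_API_KEY",    # placeholder admin API key
    "Api-Username": "system",
}

resp = requests.get(f"{BASE_URL}/review.json", headers=HEADERS, timeout=30)
resp.raise_for_status()

# Print enough detail to show what was flagged, when, and its status.
for item in resp.json().get("reviewables", []):
    print(item.get("id"), item.get("type"), item.get("status"),
          item.get("created_at"))
```

Writing that output into your own log or spreadsheet gives you an audit trail to point at when a report is required, without depending on any single built-in report.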

For all of the things that are on us, do you have suggested processes and copy?

Out of curiosity: how is the UK defining small, medium, and large service providers?

I’m interested too, so if anyone can find out I’d like to know. Although I can’t find a specific definition, it appears that smaller services are treated slightly differently from the largest service providers: “we aren’t requiring small services with limited functionality to take the same actions as the largest corporations.” Gov.uk, para. 6

Can the forum software readily provide a list of mod actions over a year? Or perhaps mod actions filtered by, say, responding to flags? I wouldn’t want to have to keep a separate record. (Sometimes I will delete a user because of a flag - that isn’t an option when responding to a flag.)

No, and I’m afraid it’s unlikely that we will provide those. As with GDPR, we provide the tools to comply, but you will need to seek your own legal advice.
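
If you want to assemble such a record yourself, the admin API does expose staff action logs. Here is a hedged sketch that exports roughly a year of entries to CSV; the exact field names and the lack of pagination handling are assumptions to check against your own instance.

```python
# Hedged sketch: export ~a year of staff action logs to CSV via the
# Discourse admin API. Field names and response shape are assumptions
# to verify on your own instance before relying on this for reporting.
import csv
from datetime import datetime, timedelta, timezone

import requests

BASE_URL = "https://forum.example.com"  # hypothetical forum URL
HEADERS = {"Api-Key": "YOUR_ADMIN_API_KEY", "Api-Username": "system"}
CUTOFF = datetime.now(timezone.utc) - timedelta(days=365)

resp = requests.get(f"{BASE_URL}/admin/logs/staff_action_logs.json",
                    headers=HEADERS, timeout=30)
resp.raise_for_status()

with open("mod_actions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["created_at", "action_name", "acting_user", "subject"])
    for row in resp.json().get("staff_action_logs", []):
        # created_at is assumed to be ISO 8601 with a trailing Z.
        created = datetime.fromisoformat(row["created_at"].replace("Z", "+00:00"))
        if created >= CUTOFF:  # keep the last 12 months only
            writer.writerow([row["created_at"], row.get("action_name"),
                             row.get("acting_user"), row.get("subject")])
```

If your instance returns paged results, you would need to loop over pages as well before filtering.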

There’s a discussion on HN here (concerning a specific case where a person running 300 forums has decided to close them all) which contains useful information and links to official docs.

AFAICT, 700 thousand monthly active UK users is the threshold for medium. 7 million is the threshold for large.

Note that, I think, some aspects of the law are not sensitive to the size of the service, while others are.

For more, see the (Draft) Illegal content Codes of Practice for user-to-user services (Ofcom link).

I think this is a case of the risk to forum owners being low-probability but high-cost. Individual judgement, perception of risk, and attitude to risk will all be in play.

Thanks. So it applies only to a few, at least in its full force.

One of my friends living in the UK is part of the reason I’m curious; she is panicking quite a lot because of this.

Here are the risk assessment guidelines: https://www.ofcom.org.uk/siteassets/resources/documents/online-safety/information-for-industry/illegal-harms/risk-assessment-guidance-and-risk-profiles.pdf?v=387549
