The UK government is planning to change online safety law, and I was curious what measures Discourse is taking to address this, or whether there is a suggested response for anyone hosting in the UK?
Our community is looking into this, but it’s unclear to me whether the onus is on us or on Discourse. If it’s on us, then in theory every WhatsApp group chat or Discord server would also need to do this.
Thanks in advance for any help. (I did check but couldn’t find a recent thread about this.)
From Ofcom’s website: “firms are now legally required to start taking action to tackle criminal activity on their platforms”
I think that Discourse already provides a way to tackle criminal activity via the ‘It’s Illegal’ flag, which alerts staff to illegal activity on the site. Realistically, other than putting in measures like the illegal flag option, is there much else that can be done?
Yup, we’re on top of it. We have been monitoring it for most of the year and are prepared.
It’s on both of us. We provide the tools to comply with the OSA (because we have to comply here on Meta) but you are responsible for how you use them.
The key considerations are:
We have this covered with the illegal content flag that @ondrej mentioned above, which should prompt you to use the existing tools to remove the content in a compliant way and do the appropriate reporting.
As above – the illegal flag or other custom flag types are available for you to use. We are also making a change so that users who are not logged in can flag illegal content. Do we have an ETA for that, @tobiaseigen?
You will need to define your internal processes yourself, but the data is all logged so you will be able to report on it when required.
I’m interested in this too, so if anyone can find it out I’d like to know. Although I can’t find a specific definition, it appears that smaller services are treated slightly differently to the largest service providers: “we aren’t requiring small services with limited functionality to take the same actions as the largest corporations.” (Gov.uk, para. 6)
Can the forum software readily provide a list of mod actions over a year? Or perhaps mod actions filtered to, say, those taken in response to flags? I wouldn’t want to have to keep a separate record. (Sometimes I will delete a user because of a flag, but deleting the user isn’t one of the options offered when responding to a flag.)
No, and I’m afraid it’s unlikely that we will provide those. As with GDPR, we provide the tools to comply, but you will need to seek your own legal advice.
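For anyone who wants to pull that data themselves rather than keep a separate record, here is a minimal sketch that exports staff actions to a CSV. It leans on my understanding of the admin staff action logs JSON endpoint (the data behind Admin → Logs → Staff Actions); the endpoint path, the `page` parameter, and the field names (`action_name`, `created_at`, `acting_user`, `target_user`) are assumptions you should verify against your own instance and Discourse version, and `forum.example.com` plus the key values are placeholders.

```python
# Sketch: export roughly a year of staff/mod actions from a Discourse site.
# Assumptions to verify on your own instance: the /admin/logs/staff_action_logs.json
# endpoint, Api-Key/Api-Username header auth, the `page` query parameter,
# and the action_name / created_at / acting_user / target_user field names.
import csv
import datetime as dt

import requests

BASE_URL = "https://forum.example.com"  # hypothetical forum URL
API_KEY = "your-admin-api-key"          # an admin API key
API_USER = "system"                     # or an admin username
CUTOFF = dt.datetime.now(dt.timezone.utc) - dt.timedelta(days=365)

headers = {"Api-Key": API_KEY, "Api-Username": API_USER}
rows = []

for page in range(500):  # hard cap so we never loop forever if paging behaves differently
    resp = requests.get(
        f"{BASE_URL}/admin/logs/staff_action_logs.json",
        headers=headers,
        params={"page": page},
        timeout=30,
    )
    resp.raise_for_status()
    logs = resp.json().get("staff_action_logs", [])
    if not logs:
        break  # no more pages
    for entry in logs:
        # created_at is ISO 8601, e.g. "2025-01-31T12:34:56.789Z"
        created = dt.datetime.fromisoformat(entry["created_at"].replace("Z", "+00:00"))
        if created >= CUTOFF:
            rows.append(
                {
                    "when": entry["created_at"],
                    "action": entry.get("action_name"),
                    # these may be usernames or nested objects depending on version
                    "moderator": entry.get("acting_user"),
                    "target": entry.get("target_user"),
                }
            )

# Keep a simple CSV as part of your own record keeping.
with open("mod_actions_last_year.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["when", "action", "moderator", "target"])
    writer.writeheader()
    writer.writerows(rows)

print(f"Exported {len(rows)} staff actions since {CUTOFF.date()}")
```

The same data is visible in the UI under Admin → Logs → Staff Actions, and if you have the Data Explorer plugin installed you can build similar reports with SQL. The point is just that the raw log already exists, so you shouldn’t need to keep a parallel manual record.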
There’s a discussion on HN here (concerning a specific case where a person running 300 forums has decided to close them all) which contains useful information and links to official docs.
AFAICT, 700 thousand monthly active UK users is the threshold for medium. 7 million is the threshold for large.
Note that, I think, some aspects of the law are not sensitive to the size of the service, whereas others are.
I think this is a case of the risk to forum owners being low-probability but high-cost. Individual judgement, perception of risk, and attitude to risk will all be in play.
Thanks Hawk. (I see the PDF link resolves to the latest version, despite looking like a link to a specific one.)
Here’s a sensible (but not authoritative) description of what the new laws might mean for self-hosted, small-scale forums that don’t specifically target children or offer porn. The main point, I think, is to understand the law and document your approach. From there:
Duties
If you run a small user-to-user service, the OSA requires you to:
By now everyone should have all the information required from Ofcom in order to carry out risk assessments. Once you have carried out your risk assessment, you will have a list of identified risks that you may want to mitigate. I am pretty confident that we have all the tools in place, but if anyone is unsure we can have the discussion here.
Do you have a reference for this? We did extensive searching and couldn’t find anything that tied a number of users to a “size” (which honestly is one of my biggest complaints about the OSA as written).
I also found that one of the most difficult things to ascertain. I wasn’t sure whether we were responsible for assessing risk on Meta (as administrators of the community), or the risk of using Discourse more generally (the risk for our customers).
If the latter, I didn’t know what size category that would put us in. Turns out it was the former.
What we learned at a seminar is that Ofcom have already reached out to the platforms that they currently believe fall into a category which requires anything more than annual self-assessment and have let them know they will have to formally submit their assessment. If you have not been contacted, I think you can assume that you are required to do your self-assessment, complete any mitigation, and reassess annually or when there are significant changes to scope. You will need to be able to show your assessment work if asked, but you don’t need to submit it anywhere.
But note that I am as new to this as the rest of you so please consider this my opinion, rather than compliance advice. You will need to do your own research.
I’ve just checked the current draft guidance and it defines “large” but not “medium”. It does, however, have notes in several places for services of 700k users, for example:
Hi @HAWK, I wondered if it might be possible to have a simple, single page (or forum post) that documents how Discourse functionality addresses the issues raised?
Currently I am piecemeal pointing to your responses in this thread, other posts about reporting illegal content, admins in chat channels, etc., as part of our evidence. It would be more helpful if there was an ‘official’ page that listed the core functions the product itself brings to the table, so that it could be referenced as evidence. It might also help folks reviewing the platform to have confidence that it satisfies the requirements.
Hey Ruth,
I’m happy to help but I need to clarify something first.
Which issues are you referring to specifically? The ones that you have identified in your risk assessment? If that is the case, feel free to list the things you are trying to mitigate for and I can help you with tooling suggestions.
Everyone will have different risks to mitigate and different levels of tolerance for those risks. As such, I can’t post any kind of definitive list. For example, we don’t allow kids under 13 on Meta so we don’t need to mitigate for a lot of the high risks pertaining to children.
If you are happy to share your assessment, I’m happy to use it as an example.