Has anyone thought of integrating more opinionated language checking into Discourse, to (hopefully) catch a lot of offensive, hurtful discourse before it starts? For example, something that would work similarly to
Seems like this could really go a long way toward reminding folks who inadvertently post stuff that hurts others. Thoughts?
I know that @erlend_sh was talking with the PerspectiveAPI folks, who I believe do work along these lines. Not sure where that discussion ended up, though.
Yes, you should take a look at this:
As suggested by
@erlend_sh, I have created a development log for a plugin I’ve begun developing for Discourse.
I have started work on building what I call the pre-emptive striker plugin (if anyone has a better suggestion for the name, throw it at me!). It was proposed at the beginning of the summer and I’m now officially developing the plugin. Link to original proposal is here.
Basically, the plugin checks what the user is writing for toxicity as they type, using Google's Perspective API. a…
@deevolution is still working on it.
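For anyone curious what that check looks like in practice, here's a minimal sketch of the round trip to Perspective API's `comments:analyze` endpoint. This is just an illustration, not code from the plugin itself: the helper names (`build_analyze_request`, `extract_toxicity`) are hypothetical, and a real composer integration would also debounce requests and handle errors.

```python
import json

# Perspective API analysis endpoint (requires an API key as a
# ?key=... query parameter on the real request).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        # Drafts are still being composed, so ask the API not to store them.
        "doNotStore": True,
    }

def extract_toxicity(response_body):
    """Pull the summary TOXICITY score (0.0 - 1.0) out of a response."""
    return response_body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# The plugin could then warn the user in the composer once the
# score crosses some configurable threshold, e.g. 0.8.
payload = build_analyze_request("example draft text")
print(json.dumps(payload, indent=2))
```

The response parsing above assumes the documented shape of a `comments:analyze` reply, where each requested attribute comes back under `attributeScores` with a `summaryScore.value` between 0 and 1.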
Alex looks interesting. The plugin we’re making with PerspectiveAPI could perhaps eventually be generalized to work with other APIs. It will at least provide a handy example implementation.