Hiding "toxic" messages using the Google Perspective API?


#1

What if technology could help improve conversations online?

Interesting API to exclude “toxic” messages from the board: Perspective

Can you imagine this in discourse? :thinking:


#2

Certainly an interesting concept. I don't know whether it would work for everyone, but it makes me wonder: if it flagged that what you are posting could be deemed “toxic”, would that make people change what they are trying to say, or would they post it anyway?

A potential use case would be an option to automatically hide comments/posts, or require approval for them, based on the “toxicity” level of the comment.
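A minimal sketch of that moderation rule, assuming Perspective-style scores in the 0.0–1.0 range. The threshold values and function name here are made up for illustration:

```python
# Hypothetical moderation rule: hide outright above one threshold,
# queue for staff approval above a lower one. Thresholds are invented
# for the example; a real deployment would tune them per community.
def moderation_action(toxicity_score, hide_above=0.9, review_above=0.7):
    """Map a toxicity probability (0.0-1.0) to a moderation action."""
    if toxicity_score >= hide_above:
        return "hide"
    if toxicity_score >= review_above:
        return "require_approval"
    return "approve"
```

For example, `moderation_action(0.95)` returns `"hide"`, while `moderation_action(0.75)` only queues the post for approval.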


(Sam Saffron) #3

I signed up for the API just to check it out, but I doubt this would have any impact in small to medium sized communities.

In my opinion you would need a giant, super active community with limited moderators that participate in “hot” and polarizing topics.

This could possibly be useful to a community like BBS for “auto flagging”, BUT … and there is a giant BUT… you have to send all the content to Google, which, on one hand they can crawl anyway, but on the other hand can be considered a breach of privacy by some users.


(Dan Fabulich) #4

How big is “giant”?

I’d certainly be OK with sending all public posts to Google. As you say, we want Google to crawl them anyway.


(Sam Saffron) #5

I would say big enough that, in practice, many posts will remain unread by TL3 or staff members for many hours.

For whatever your definition of many is.


(Justin Pierce) #6

Came here to see if this had been posted yet. I think it looks really promising. I requested API access – considering implementing it into the video game I’m making. Would love Discourse integration.


(Sam Saffron) #7

It was quite easy to get access; I got an email today telling me I am whitelisted for API access. Full details of the API are here:

https://conversationai.github.io/

They have a cough google group for announcements here: Google Groups

We do not have time to experiment with this right now, but maybe one day when I free up. If anyone in the community wants to pick this up and experiment, feel free to.
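For anyone picking this up, a rough sketch of what a call to the Perspective API looked like at the time: a JSON POST to the `comments:analyze` endpoint asking for a `TOXICITY` score. The endpoint URL and request/response shapes below reflect the v1alpha1 API and may have changed since; treat them as an assumption, not gospel:

```python
import json
import urllib.request

# v1alpha1 endpoint as documented when this was written; may change.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text):
    """Build the JSON body Perspective expects: the comment text plus
    the attributes we want scored (here just TOXICITY)."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Pull the overall 0.0-1.0 TOXICITY probability out of a
    decoded response dict."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def analyze(text, api_key):
    """POST a comment to Perspective and return its toxicity score.
    Requires a whitelisted API key."""
    req = urllib.request.Request(
        API_URL + "?key=" + api_key,
        data=json.dumps(build_request(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return toxicity_score(json.load(resp))
```

Note that this is exactly the privacy trade-off mentioned above: every post you score gets sent to Google.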


(Jeff Atwood) #8

They tested it using various texts on BBS.boingboing and the results looked … bad.


#9

Maybe I’m a bit of a purist when it comes to forums (that really are communities of interest), but I believe that it is the job of admins and moderators to know the climate of their community, the tendencies of their population, and to determine what is right and wrong for their community. In order to do this, admins and mods have to interact with their community. I don’t like the idea of Google making that determination for me. Slippery slope.

However, I do think it might have a role in corporate applications. In my organization, we have over 300 technical communities of practice, all within STEM disciplines, in an organization with 38k employees around the world. I’m basically the admin/super moderator/janitor for all of these communities (using SharePoint - bleh). Something like this would make my job easier in that application. Disgruntled employees post some strange things before they walk out the door.


(Erlend Sogge Heggen) #10

Continued here:
