I’m looking for ways to integrate AI into my Discourse forum to assist with moderation. I don’t need it to replace my moderators, I just need help catching things humans usually can’t see. Sometimes, it’s because these issues are literally invisible to a moderator (like a spammer who is creating multiple accounts from the same IP address). Other times, it is visible to a moderator, but it’s easy to get lazy and miss these things (like a topic that is posted in the wrong category, or a topic that is veering off course).
There are endless tasks an AI moderator could help with. Just a few ideas off the top of my head:
Monitoring all new posts to indicate the likelihood of whether they’re spammers or legitimate users.
Monitoring new users and their activity until they’ve reached a certain trust level.
Catching problem users making new accounts after being suspended.
Identifying topics that have been posted in the wrong category, and offering suggestions for which category they should be moved to.
Flagging and immediately removing NSFW content.
Identifying when the conversation in a topic is veering off-course or should be locked.
Identifying when a topic has already been covered and should be redirected.
Identifying when a user has created multiple accounts (multiple users logging in from the same IP address).
Identifying when a user is making a self-promotional or irrelevant post.
Not to mention (and this would be going in a slightly different direction), there are times when AI could even respond to certain topics with a clearly marked AI profile. For example, if someone posts a question about how to use the forum or where to find a certain feature (like how to update their profile), the bot could respond by identifying when it’s a question it could easily answer, and then it could jump in and explain how to do it.
I’m barely even scratching the surface here, but the underlying question is: Has anyone created an AI bot that can assist with these types of moderation tasks in Discourse?
If not, what’s holding this kind of innovation back? This seems like it would be insanely useful for forum admins, not to replace humans (although that may be possible in some cases), but to help humans do the job a lot better.
I don’t know, but I would guess: AI is very unreliable, and it can get very expensive very fast.
Some of those options are already possible, though. They’re not in common use mainly because, well, AI is unreliable and needs a human watching it.
An answering bot that follows every post and jumps in when triggered by content must be expensive, both in hardware and in plain money. But a model that answers every topic starter at the category level is already possible.
Then there’s something like watching IPs, which is quite easy to cover without AI but is really problematic: sharing an IP address is quite common.
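To illustrate that point, a simple script (no AI needed) can already surface accounts that share an IP address. The account list and field names below are made up for illustration; on a real forum you’d pull this from the Discourse admin API or database:

```python
from collections import defaultdict

# Hypothetical account data; in Discourse this would come from the admin API.
accounts = [
    {"username": "alice",  "ip": "203.0.113.7"},
    {"username": "bob",    "ip": "198.51.100.4"},
    {"username": "alice2", "ip": "203.0.113.7"},
]

def shared_ips(accounts):
    """Return {ip: [usernames]} for IPs used by more than one account."""
    by_ip = defaultdict(list)
    for acct in accounts:
        by_ip[acct["ip"]].append(acct["username"])
    return {ip: users for ip, users in by_ip.items() if len(users) > 1}

print(shared_ips(accounts))  # → {'203.0.113.7': ['alice', 'alice2']}
```

But that’s exactly the problem: offices, universities, and carrier-grade NAT mean many legitimate users share an IP, so this is a weak signal to investigate, not proof of sockpuppeting.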
Have you checked out Discourse AI and Discourse Chatbot?
I did just hear about this yesterday from @Jagster (thanks, Jakke, for pointing that out). I’ve been looking into this a bit more, the advanced version in particular… and if I’m reading it right, it looks like it’ll be somewhat expensive to implement, requiring either an Enterprise hosting account or a pretty beefy self-hosted server.
Either way, it’s good to know this already exists as an option. It looks like this checks some important boxes, but I can think of many more ways it can be utilized. I’m excited to see how this continues to develop in the months and years ahead. There’s a ton of potential for this kind of thing!
There are a lot of potential positive things for this, also a lot of risks and drawbacks.
Stack Exchange has an A.I. bot that reviews answers and will mention this to the author if their answer seems unclear:
“As it’s currently written your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center. -Community bot”
This kind of a prompt can be really helpful to inspire clearer explanations and avoid people becoming confused, frustrated, or annoyed with unclear answers.
It’s not resource intensive on your self-hosted instance, because you can just use hosted models, e.g., OpenAI. So you only pay for API calls for embeddings and chat.
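To get a rough sense of what that pay-per-call model might cost, here’s a back-of-envelope sketch. All the numbers (post volume, average post length, per-token prices) are illustrative assumptions, not current rates; check your provider’s pricing page before relying on them:

```python
# Back-of-envelope API cost estimate for screening every new post.
# All constants below are assumptions for illustration only.
POSTS_PER_DAY = 500          # assumed forum volume
TOKENS_PER_POST = 400        # assumed average post length in tokens
EMBED_PRICE_PER_MTOK = 0.02  # assumed $ per 1M tokens for embeddings
CHAT_PRICE_PER_MTOK = 0.50   # assumed $ per 1M tokens for a small chat model

def monthly_cost(posts_per_day, tokens_per_post, price_per_mtok, days=30):
    """Total monthly token spend at a given per-million-token price."""
    tokens = posts_per_day * tokens_per_post * days
    return tokens / 1_000_000 * price_per_mtok

embed = monthly_cost(POSTS_PER_DAY, TOKENS_PER_POST, EMBED_PRICE_PER_MTOK)
chat = monthly_cost(POSTS_PER_DAY, TOKENS_PER_POST, CHAT_PRICE_PER_MTOK)
print(f"embeddings: ~${embed:.2f}/mo, chat screening: ~${chat:.2f}/mo")
# → embeddings: ~$0.12/mo, chat screening: ~$3.00/mo
```

The point being: embedding-based checks on a mid-sized forum can be surprisingly cheap; it’s running a chat model over every post (or long conversations) where costs climb.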
I built a custom integration for a client using Discourse:
Collected past moderation activity and trained an NLP model to flag topics and comments that needed attention.
Added a toxicity moderator, also trained on their past moderation activity.
Added a sentiment integration to help quickly resolve comments.
Training is done on Google Colab, and the model is loaded on GCP to serve APIs from the Discourse webhooks.
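The webhook side of an integration like that can be sketched roughly as below. The keyword-counting scorer is only a stand-in for a trained model, and the flag threshold is an assumed value you’d tune from past moderation data; the payload shape follows Discourse’s post webhook, which nests the post body under `post` → `raw`:

```python
# Minimal sketch: receive a Discourse post webhook payload and decide
# whether to flag the post for moderator attention.
FLAG_THRESHOLD = 0.3  # assumed cutoff, tuned from past moderation activity

def toxicity_score(text):
    """Placeholder for the trained model: fraction of flagged words."""
    bad_words = {"idiot", "stupid", "scam"}
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in bad_words for w in words) / len(words)

def handle_webhook(payload):
    """Score the incoming post and return a flag decision."""
    raw = payload.get("post", {}).get("raw", "")
    score = toxicity_score(raw)
    return {"flag": score >= FLAG_THRESHOLD, "score": round(score, 2)}

print(handle_webhook({"post": {"raw": "This is a scam, you idiot!"}}))
# → {'flag': True, 'score': 0.33}
```

In production this function would sit behind an HTTP endpoint that verifies the webhook signature, and a flagged result would call back into the Discourse API to raise a flag rather than acting on its own.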