AI Forum Moderation: Seeking Insights and Experiences

I like using forums because we’re all real people with a shared interest or goal. When someone replies with an incorrect answer to a question, another user is bound to show up to provide a correction. I suppose the same could happen if an AI gave an incorrect response, but it’s just not the same. It’s also helpful for our own thinking to read how others approach a problem. I’ve often come to new ways of thinking from reading someone’s well-reasoned response, or learned a new way of doing something I thought I already knew how to do.

Another consideration is the potential for false positives, which can (and do!) turn people away. If I visit a forum as a new user and a machine mistakenly flags my post or suspends me or what have you, and it’s clear it shouldn’t have happened, I…just won’t go back, most likely. I’ll either navigate away from the site and forget all about it, or be just annoyed enough not to bother with getting it remedied.

I feel like the impulse to remove human elements from moderation is heading in the wrong direction. Moderation can sometimes have predictable rules - and we have the watched words feature, or matching an IP address, for instance, to handle things like that. But using an algorithm to handle the squishy stuff just ends up as a never-ending chase after the perfect algorithm, taking attention away from building a healthier community where the root behaviors can be addressed. At the end of the day, my hope is for users to change their behavior, and I have to believe they are capable of it.

The routine questions are opportunities for people to make human connections, and those touchpoints are absolutely crucial for developing long-term users who will champion your forum. A friendly face that shows up to answer an easy question creates a welcoming atmosphere in a way that AI just can’t. This is low-hanging fruit from a community building standpoint!