I’m planning to launch a Discourse community, and maybe it’s a stupid question, but after reading your post I wonder: does this platform have any protection against AI-generated spam or coordinated abuse?
I use this on my public forum and it works great: fighting AI spam with AI spam detection. Literally none of it makes it to live posts.
You might be interested in the documentation about spam:
https://meta.discourse.org/tags/c/documentation/10/spam
So this is per-post spam protection, I guess.
OK, and is there anything else to protect the forum?
I’m not following; did you read the documentation? There are many mitigations in place, some using AI and some not.
Perhaps brainstorm a bit with ask.discourse.com; it is aware of all the little edge cases here.
Sure, on it. Thanks!
It sounds like the AI spam detection handles obvious per-post spam very effectively.
I’m curious how teams handle situations where no individual post breaks rules, but activity across multiple accounts or over time still feels coordinated or “off.”
Are there built-in workflows for that?
That’s where community moderation features come in. If your members notice something is off with a given post, they can flag it. The “something else” reason is handy as a way to inform moderators about a noticed trend.
Makes sense, thanks for clarifying.
Out of curiosity, how does this typically scale in larger or very active communities? Does the flagging system usually surface trends clearly enough, or do moderators sometimes need to correlate activity across posts and accounts manually?
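For what it’s worth, when moderators do need to correlate activity manually, a small script over exported post data can surface crude coordination signals. Here is a minimal sketch, assuming a hypothetical export of `(author, timestamp, link)` tuples (the field names and the heuristic are illustrative, not a Discourse feature): it flags links pushed by several different accounts within a short time window.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical exported post records: (author, ISO timestamp, link mentioned).
posts = [
    ("user_a", "2024-05-01T10:00:00", "example.com/promo"),
    ("user_b", "2024-05-01T10:05:00", "example.com/promo"),
    ("user_c", "2024-05-02T09:00:00", "example.com/blog"),
]

def coordinated_links(posts, min_accounts=2, window_hours=1.0):
    """Group posts by link; report links promoted by several distinct
    accounts within a short time window (a crude coordination signal)."""
    by_link = defaultdict(list)
    for author, ts, link in posts:
        by_link[link].append((datetime.fromisoformat(ts), author))
    suspicious = {}
    for link, entries in by_link.items():
        entries.sort()
        authors = {author for _, author in entries}
        span_hours = (entries[-1][0] - entries[0][0]).total_seconds() / 3600
        if len(authors) >= min_accounts and span_hours <= window_hours:
            suspicious[link] = sorted(authors)
    return suspicious

print(coordinated_links(posts))
# → {'example.com/promo': ['user_a', 'user_b']}
```

This is deliberately simplistic; in practice one would also look at shared IPs, registration dates, and flag history, which the built-in review queue already exposes per account.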