Are you experiencing AI based spam?

I'm curious to hear from community members whether they are experiencing any AI-powered spam, or an uptick in it.

Specifically, this means seeing answers to questions that look ChatGPT-generated and either read as non-human or contain hallucinations (a common problem with LLMs).

I am experiencing AI based spam

  • Yes
  • No

If the answer is yes, I'm curious to hear…

  • How often is this happening?
  • How much of a problem is this creating within your community?
  • What are you currently doing about it?

If the answer is no, I'm curious to know…

  • How are you preventing this from happening?
  • Are there reasons as to why your community inherently doesn’t face this issue?

We just use AI as a tool to seek knowledge, maybe a little casual chat.
Perhaps our community is small, and shares the common sense that hallucinations are BAD.


Private community (login required, invite only).


I reckon the most effective way to stop any kind of spamming is being a member of a community in a very small and difficult language. It stops the clowns who are doing manual labour.

Well, we all know spammers aren't that smart, and automated traffic doesn't care about language, genre, or even size. So there must be another reason why some forums or sites are like honeypots for all kinds of trash while others live without drama.

I don't have an answer for why spammers can sign up on one site and not another when the system and setup should be identical. But one thing is sure: an admin's (or other background force's) need for fast growth from a global audience will lead to bot and spam problems.


In the last two weeks or so, we have seen a spike on our site. We’re seeing typical spam with hidden links on new replies from new accounts. When we increased the reputation for creating new posts, we saw AI-generated responses increase, and it seemed the bots were trying to slowly increase their reputation on bogus accounts. These responses don’t have obvious bogus links, they just have generic AI text that doesn’t contribute to answering the question.

We got hit over a weekend with a large spike in spam posts, enough that someone created a new topic saying there was too much spam on our forum. Since then, admins need to check the site every day to clean up bogus AI posts. We’re also seeing AI posts on accounts that were created in the past and had no activity, which makes it seem like some spam bots had been seeding accounts for a while and letting them sit with no activity. Now they are trying to slowly get past the engagement limits so they can post new topics.

As noted above, we increased the trust levels required for posting new topics. We also enabled Akismet. But this hasn't stopped the AI spam posts. Currently an admin/moderator needs to check the forum every day to review flagged posts and clean up. Some are challenging and look like they might be a person, so two people need to check.

We encouraged our users to help out and flag posts that look like AI and that has helped.

Our forum is fairly low volume and has run for years with very low admin clean-up and maintenance, but it seems the AI bots have found us. I’m thinking AI may be needed to stop AI?


Yeah, sadly. Either that, or you temporarily vet all new users and slow down the time from when a user signs up until their first post.

We do have:

It also supports flagging, so you could use that today.


On that note we just published a guide on this!


Following up on this, has anyone had a chance to try this out? I would love to get your feedback


It haven’t seen a lot of it yet, but my forum holds the first few posts in moderation, and I can usually tell if someone might be a spammer by certain clues. I lock the suspicious ones at TL0 until they post something that is clearly on topic.

It isn’t a “chat about random things” forum, so it’s usually possible to tell whether someone is faking interest by the first post.


Actually, I just stumbled on a user who slipped by and is posting with ChatGPT or other AI. There might be more spam accounts that I’ve missed.

Some ideas on how to fight it:

  • Make a database of VPN providers. This one's IP address is from “M247 Europe SRL”, which is a VPN service provider. I've always wanted some kind of notification that a new account is using a VPN; at the moment I have to check manually.
  • Keep track of read time, days visited, topics/posts read. This user spent 8 minutes reading the site but posted 6 comments, and only visited 3 times on the day of their registration. The user is actually still TL0 naturally, because they haven’t really done anything except post comments.
  • I wrote more ideas in comments on this page.

I wonder if it’s possible to roughly classify users by the ratio of time spent on the site vs. number of words written, plus other signals like VPN, pasted content, injected content, etc. Suspect accounts could be marked for review.
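That ratio idea could be sketched as a Data Explorer query. This is a rough, untested sketch, not a vetted classifier: it reuses the user_stats columns (time_read, post_count) that appear elsewhere in this thread, but the posts-per-minute ratio, the GREATEST() floor, and the trust-level cutoff are arbitrary starting points.

```sql
-- Rough sketch (assumes Discourse's users/user_stats tables on Postgres):
-- rank low-trust accounts by posts written per minute of read time.
-- A very high ratio suggests pasted or generated content rather than
-- someone who actually read the site before posting.
SELECT u.id,
       u.username,
       us.post_count,
       us.time_read / 60.0 AS minutes_read,
       -- GREATEST() floors read time at one minute to avoid divide-by-zero
       us.post_count / GREATEST(us.time_read / 60.0, 1.0) AS posts_per_minute
FROM users u
JOIN user_stats us ON us.user_id = u.id
WHERE u.trust_level < 1
  AND us.post_count > 0
ORDER BY posts_per_minute DESC
LIMIT 50
```

Accounts near the top would be candidates for manual review rather than automatic suspension; folding in the other signals (VPN ranges, days visited, pasted content) would need additional joins or plugin support.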

Edit: this quick Data Explorer query turned up a few more, though some of them were already suspended.

SELECT u.id, u.username, us.time_read, us.post_count
FROM users u
LEFT JOIN user_stats us
ON us.user_id = u.id
WHERE u.trust_level < 1
AND u.created_at > '2023-01-01'
AND us.time_read < 1000 -- seconds
AND us.post_count > 1

This is an interesting take for weeding out people who might “fake activity” in a single day to upgrade to a higher TL.

I like the recommendation here to use additional ways to classify users, something to look into!