AI-powered spam detection


I am happy to announce a new spam detection module in Discourse AI :confetti_ball:

Over the past few months, we have sadly noticed that Akismet’s performance has been quite uneven.

AI-based spam detection allows you to better control how spam scanning works in your community.

We have seen very good results on a few forums now. Please share your experience here.

For those self-hosting or generally looking for a “free” way of using these tools, have a look at OpenRouter. There are quite a few free LLMs out there that can do a reasonable job scanning for spam.
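As a rough illustration of the OpenRouter route: OpenRouter exposes an OpenAI-compatible chat-completions API, so a spam check can be a single classification request. This is a hedged sketch, not Discourse's actual implementation — the model id, prompt wording, and `build_spam_check_request` helper are all assumptions for illustration.

```python
import json

# Sketch: classify a post as spam via an OpenRouter free model.
# OpenRouter's OpenAI-compatible endpoint lives at
# https://openrouter.ai/api/v1/chat/completions (POST, Bearer API key).
def build_spam_check_request(post_text: str) -> dict:
    return {
        # Assumed free-tier model id; pick any ":free" model from OpenRouter's list.
        "model": "meta-llama/llama-3.1-8b-instruct:free",
        "messages": [
            {
                "role": "system",
                "content": "You are a spam classifier for a forum. "
                           "Reply with exactly 'spam' or 'not spam'.",
            },
            {"role": "user", "content": post_text},
        ],
        # Deterministic output is preferable for a yes/no classifier.
        "temperature": 0,
    }

payload = build_spam_check_request("Buy cheap crypto now!!! http://example.com")
print(json.dumps(payload, indent=2))
```

You would then POST this payload with your OpenRouter API key and treat a `spam` reply as a flag; the free models are rate-limited, which is usually fine for the volume of new-user posts.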

Read all about it at:

25 Likes

Reporting back from results on meta over the weekend:

@Falco amended our “custom instruction” here to:

New users posting about crypto and linking to ANY site without adding important information to the conversation are spammers.

Treat wildly off-topic posts as spam. As you are analyzing posts from what is primarily a support forum for Discourse Forum Software, if a topic isn’t about Discourse and related subject like other forum software, online communities, computer science and software development and doesn’t have a clear link to our main subject it is probably spam. For example, a new topic explaining details about a new car, a video game or any completely unrelated subject to Discourse, like vaccines, geo-politics or beach vacations are spam. Even technical topics, about subjects not related to Discourse are spam.

We have had zero missed spam over the past two days, and we did catch:


We have seen so much slip through Akismet in the past few months; this new system is proving very effective.

We have disabled Akismet on meta.

16 Likes

I can’t enable this on a hosted Business account because even though the default LLMs included in the hosted version show up as ‘Configured’, they’re not available to select on the ‘Spam’ tab:

4 Likes

So sorry Anurag,

@awesomerobot and team are working on improving clarity around the “seeded LLMs”.

We have been rolling out experimental features and “unlocking” the ability to use our internal LLMs when you opt-in for an experimental feature via the “What’s new” feed in admin.

I completely hear you, though, that this is confusing:

  1. It should be clear on the LLMs page that an LLM is only available for X, Y, Z features.
  2. We should invite you to “opt in” to an experiment directly from the LLMs page.

If you head to your What’s new feed (as a CDCK hosted customer) you will see:

Once you click that it will “unlock” the Small LLM for use with the anti-spam feature.

7 Likes

At least for me, the “quick enable” toggle for spam detection behaves quite confusingly. After a reload it always appears disabled, regardless of whether the feature is actually enabled, so I have to check the staff action logs to find out. The toggle always says “feature enabled” after a reload, even when the feature was already enabled and clicking it will actually disable it.

9 Likes

Thanks for reporting! We will have a look at this and report back.

3 Likes

@Moin we’ve identified the issue; it was related to site setting naming. We have fixed it and will be deploying it to all sites soon.
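For the curious, a setting-name mismatch produces exactly the symptom reported above: the UI reads a key that was never written, falls back to the default, and the toggle renders “disabled” after every reload. This is a hypothetical minimal sketch of that failure mode — the setting names here are invented for illustration, not the actual Discourse setting keys.

```python
# Hypothetical: the backend persisted the feature flag under one key...
settings = {"ai_spam_detection_enabled": True}

# ...but the toggle initialized from a differently named key, so the
# lookup missed and fell back to False — "disabled" on every reload.
buggy_state = settings.get("spam_detection_enabled", False)

# The fix: read the same key that was saved.
fixed_state = settings.get("ai_spam_detection_enabled", False)

print(buggy_state, fixed_state)  # False True
```

With the names aligned, the toggle reflects the persisted state instead of the default.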

For now, if you go to What's New, click Check for updates, and then go through the process, it should work properly.

7 Likes

I must say, this spam detection is quite efficient! I tested it on my forum with a test user and it hid the post almost immediately.
Thanks, Discourse Team!

6 Likes

I also ran into this problem and couldn’t find a solution by searching for the “No LLMs available” key phrase, since it was previously only present in a screenshot, not in text form.

So I’m adding this reply to allow others who suffer from the same confusion to more easily find the solution provided here:

5 Likes

Trying to add custom instructions on our hosted site: they test fine, but saving the changes gives an error: “An error occurred: You can’t use this model with this feature”

1 Like

Apologies, I just queued a deploy which will fix it

Can you try again in 15 minutes?

3 Likes

Thanks, the issue is resolved. Much appreciated!

3 Likes