Today we are saying goodbye to the Discourse AI - Toxicity module in favor of Discourse AI - AI triage, which leverages the power of Large Language Models (LLMs) to provide a superior experience.
Why are we doing this?
Previously, using the Toxicity module meant…
- You were stuck with a single pre-defined model
- No customization for your community-specific needs
- Confusing threshold metrics
- Subpar performance
LLMs have come a long way and can now provide a better-performing and more customizable experience.
What's new?
Discourse AI - AI triage can be used to triage posts for toxicity (among other uses) and hold communities to their specific codes of conduct. This means…
- Multiple LLMs supported for different performance requirements
- Easy to define what content should be triaged and how it should be treated
- Customizable prompts for community-specific needs (see the example sketch after this list)
- Flagging content for review
and much more.
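To make the customization concrete, here is a rough sketch of the kind of community-specific prompt you might supply to the triage automation. The wording, the rules, and the "flag"/"clear" reply convention are all hypothetical; adapt them to your own code of conduct and to however you configure the automation to act on the LLM's reply.

```text
You are a moderator for an online community. Evaluate the post below
against our code of conduct:

- No personal attacks or harassment
- No hate speech or slurs
- No sharing of another person's private information

Reply with exactly one word: "flag" if the post violates any rule,
or "clear" if it does not.
```

Keeping the reply to a single keyword makes it straightforward for the automation to decide whether a post should be flagged for review.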
To assist with the transition, we have already written the following guides:
- Setting up toxicity/code of conduct detection in your community
- Setting up spam detection in your community
- Setting up NSFW detection in your community
What happens to Toxicity?
This announcement should be considered very early notice; until we are ready to decommission the module, you can continue to use Toxicity. When we do, we will decommission the module, remove all of its code from the Discourse AI plugin, and remove the associated services from our servers.
Update: The Toxicity module has now been officially removed from Discourse, including all related site settings and features. We urge users to transition to Discourse AI - AI triage by following the guides listed above.
Business and Enterprise customers will see a new option under What's New in the admin settings on their sites, allowing them to enable Discourse-hosted LLMs to power AI triage at no extra cost.