What’s next for NSFW detection in Discourse AI

If you’ve seen our previous announcement, then you are aware that we are shifting toward LLM-based solutions, as they provide a vastly superior experience. As a result, we will soon be decommissioning the NSFW module in favor of the Discourse AI Post Classifier - Automation rule.

:information_source: This will be a beta experience so expect changes to occur to the feature.

Why are we doing this?

For the same reasons outlined in our previous announcement.

What's new?

We will add another guide specifically for NSFW detection to help with the transition.

What happens to NSFW?

This announcement should be considered very early notice. Until we are ready to decommission the module, you can continue to use NSFW. When we do decommission it, we will remove all of its code from the Discourse AI plugin and the associated services from our servers. The deprecation will only happen after AI Triage is ready to handle images, which will be done soon.


FYI about the following change


This part is now complete: you can now configure and use a vision-powered LLM via AI Triage for image detection.

As a reminder, we have NOT yet deprecated the NSFW module. This will still take a bit of time.

A new guide is available here.
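To give a rough idea of how the replacement works: the AI Triage automation sends each new post to an LLM with a system prompt you write, then takes an action (flag, hide, categorize, and so on) when the model's reply matches a keyword you choose. A minimal example prompt for NSFW image detection might look like the sketch below; the wording is an illustration, not the official guide's recommendation:

```
You are a content-safety classifier. Examine the text and any images in
the post. Respond with exactly "NSFW" if the content includes nudity,
sexual content, or graphic violence. Otherwise respond with exactly "SAFE".
```

With the automation's "search for text" field set to NSFW, a matching reply triggers whatever action you configured, such as flagging the post for moderator review. See the guide linked above for the exact settings.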


As a heads-up, we are now hiding the site settings for enabling/disabling Toxicity and NSFW. This is part of our ongoing effort to deprecate these features.

If you have these features turned on, they will still operate as normal. We have NOT fully deprecated them yet.

If you have them turned off and would like to turn them on, you will no longer be able to do so.
