Discourse AI - NSFW

:bookmark: This topic covers the configuration of the NSFW (Not Safe For Work) feature of the Discourse AI plugin.

:person_raising_hand: Required user level: Administrator

The NSFW module can automatically compute an NSFW score for every new image uploaded in posts and chat messages on your Discourse instance. This helps identify and manage content that may be inappropriate or explicit for a professional or public environment.

You can also enable automatic flagging of content that crosses a threshold.
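
Under the hood, the module sends each new image upload to the configured inference service and records the scores it returns. The sketch below illustrates that round trip; the endpoint path, payload shape, and response fields are assumptions for illustration, not the service's actual contract (check the self-hosting guide for the real API):

```python
import requests

# Hypothetical values: in Discourse these come from the
# ai_nsfw_inference_service_api_endpoint and
# ai_nsfw_inference_service_api_key site settings.
ENDPOINT = "https://nsfw-inference.example.com/classify"
API_KEY = "secret-key"

def classify_upload(image_url: str, model: str = "opennsfw2") -> dict:
    """Ask the inference service to score one image upload.

    The payload and response shapes here are illustrative
    assumptions, not the service's documented interface.
    """
    response = requests.post(
        ENDPOINT,
        headers={"X-API-Key": API_KEY},
        json={"model": model, "content": image_url},
        timeout=30,
    )
    response.raise_for_status()
    # e.g. {"nsfw_probability": 0.87} for opennsfw2
    return response.json()
```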

Settings

  • ai_nsfw_detection_enabled: Enables or disables the module.

  • ai_nsfw_inference_service_api_endpoint: URL where the API for this module is running. If you are using CDCK hosting, this is automatically handled for you. If you are self-hosting, check the self-hosting guide.

  • ai_nsfw_inference_service_api_key: API key for the API configured above. If you are using CDCK hosting, this is automatically handled for you. If you are self-hosting, check the self-hosting guide.

  • ai_nsfw_models: The model we’ll use for image classification. We offer two, each with its own threshold settings:

    • opennsfw2 returns a single score between 0.0 and 1.0. Set the threshold for an upload to be considered NSFW through the ai_nsfw_flag_threshold_general setting.
    • nsfw_detector returns scores for four categories: drawings, hentai, porn, and sexy. Set the threshold for each one through the respective ai_nsfw_flag_threshold_${category} setting. If any category’s score crosses its threshold, we’ll consider the content NSFW (see the sketch after this list).
  • ai_nsfw_flag_automatically: Automatically flags posts and chat messages that score above the configured thresholds. When this setting is disabled, we only store the classification results in the database.
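
To make the threshold rules concrete, here is a minimal sketch of the flagging decision. The threshold names mirror the settings above, but the threshold values and the exact shape of each model's scores are assumptions for illustration:

```python
# Thresholds as configured by the ai_nsfw_flag_threshold_* settings
# (the values below are illustrative, not defaults).
GENERAL_THRESHOLD = 0.7   # ai_nsfw_flag_threshold_general
CATEGORY_THRESHOLDS = {   # ai_nsfw_flag_threshold_${category}
    "drawings": 0.9,
    "hentai": 0.7,
    "porn": 0.7,
    "sexy": 0.8,
}

def is_nsfw_opennsfw2(score: float) -> bool:
    # opennsfw2 returns a single score between 0.0 and 1.0.
    return score >= GENERAL_THRESHOLD

def is_nsfw_detector(scores: dict) -> bool:
    # nsfw_detector returns one score per category; the upload is
    # treated as NSFW if any category crosses its own threshold.
    return any(
        scores.get(category, 0.0) >= threshold
        for category, threshold in CATEGORY_THRESHOLDS.items()
    )

# A crossed threshold only triggers a flag when
# ai_nsfw_flag_automatically is enabled.
print(is_nsfw_opennsfw2(0.85))                       # True
print(is_nsfw_detector({"porn": 0.2, "sexy": 0.9}))  # True
```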

Additional resources

Last edited by @hugh 2024-08-06T05:37:13Z

Last checked by @hugh 2024-08-06T05:45:45Z
