Discourse AI - Large Language Model (LLM) settings page

:bookmark: This guide covers the LLM settings page which is part of the Discourse AI plugin.

:person_raising_hand: Required user level: Administrator

The dedicated settings page is designed to have everything related to Large Language Models (LLMs) used for Discourse AI features in one place.

:raised_hand_with_fingers_splayed: Depending on which Discourse AI features are enabled, an LLM might be needed. Please check each Discourse AI feature to know whether an LLM is a prerequisite.


Features

  • Add new models, with prepopulated information
  • Add custom models not mentioned
  • Configure LLM settings
  • Allow specific LLM use for AI Bot
    • See the AI Bot username
  • Enable vision support (model dependent)
  • Test
  • Save settings

Adding LLM connections

  1. Go to Admin → Plugins → AI
  2. Go to the LLMs tab
  3. Add a new connection, pick your model
  4. Add in the API key (depending on the model, you might have more fields to input manually) and save
  5. (Optional) Test your connection to make sure it’s working

Supported LLMs

:person_tipping_hand: You can always add a custom option if you don’t see your model listed. Supported models are continually added.

  • Grok-2
  • Deepseek-R1
  • Nova Pro
  • Nova Lite
  • Nova Micro
  • o3-pro
  • o3
  • o3-mini
  • GPT-4.1 (including nano and mini)
  • GPT-4o
  • GPT-4o mini
  • OpenAI o1 Preview
  • OpenAI o1 mini Preview
  • Claude Sonnet 3.7
  • Claude Sonnet 3.5
  • Claude Haiku 3.5
  • Gemini Pro 1.5
  • Gemini Flash 1.5
  • Gemini Flash 2.0
  • Llama 3.1
  • Llama 3.3
  • Mistral large
  • Pixtral large
  • Qwen 2.5 Coder

Additionally, hosted customers can use the CDCK Hosted Small LLM (Qwen 2.5) pre-configured in the settings page. This is an open-weights LLM hosted by Discourse, ready for use to power AI features.

Configuration fields

:information_source: You will only see the fields relevant to your selected LLM provider. Please double-check any pre-populated fields, such as Model name, with the appropriate provider.

  • Name to display
  • Model name
  • Service hosting the model
  • URL of the service hosting the model
  • API Key of the service hosting the model
  • AWS Bedrock Access key ID
  • AWS Bedrock Region
  • Optional OpenAI Organization ID
  • Tokenizer
  • Number of tokens for the prompt
  • Enable responses API (OpenAI only; be sure to set the URL to https://api.openai.com/v1/responses)

Technical FAQ

What is a tokenizer?

  • The tokenizer translates strings into tokens, which is what a model uses to understand the input.
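Conceptually, a tokenizer maps text to a sequence of integer IDs that the model consumes. The toy sketch below only illustrates that idea; the whitespace "vocabulary" is invented, and real LLM tokenizers use subword schemes such as BPE rather than word splitting:

```python
# Toy illustration only: real LLM tokenizers use subword algorithms
# (e.g. BPE), not whitespace splitting. The vocabulary here is built
# on the fly purely to show the text -> token ID mapping.
def toy_tokenize(text, vocab):
    """Map each whitespace-separated word to an integer token ID."""
    return [vocab.setdefault(word, len(vocab)) for word in text.split()]

vocab = {}
tokens = toy_tokenize("the model reads tokens not characters", vocab)
print(tokens)           # six distinct words -> [0, 1, 2, 3, 4, 5]
print(len(tokens))      # the "token count" the settings page cares about
```

The point is that model limits are measured in these IDs, not in characters or words, which is why the plugin needs to know which tokenizer your model uses.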

What number should I use for Number of tokens for the prompt?

  • A good rule of thumb is 50% of the model's context window, which is the sum of how many tokens you send and how many tokens the model generates. If the prompt gets too big, the request will fail. This number is used to trim the prompt and prevent that from happening.
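The rule of thumb can be sketched as follows. This is an illustration, not Discourse's actual trimming code; the 128k context window and the drop-oldest-tokens strategy are assumptions for the example:

```python
# Sketch of the 50% rule of thumb from the FAQ above.
# The 128k window and the drop-oldest trimming strategy are assumptions.
def prompt_token_budget(context_window):
    """Reserve half the context window for the prompt,
    leaving the other half for the model's response."""
    return context_window // 2

def trim_to_budget(token_ids, budget):
    """Drop the oldest tokens so the prompt fits within the budget."""
    return token_ids[-budget:] if len(token_ids) > budget else token_ids

budget = prompt_token_budget(128_000)        # e.g. a 128k-context model
print(budget)                                # 64000

too_long = list(range(70_000))               # a prompt that exceeds the budget
print(len(trim_to_budget(too_long, budget))) # 64000
```

A prompt already under the budget is passed through unchanged; only oversized prompts are trimmed.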

Caveats

  • Sometimes you may not see the model you want to use listed. While you can add it manually, we will support popular models as they come out.

Last edited by @pedro 2025-10-25T04:02:25Z


It’s too difficult, I don’t know how to do it at all. I hope to update specific tutorials on various AIs, such as Google login settings.


We improved the UI a lot in the past week, can you try it out again?


When will Gemini 2.0 be supported?

It has been supported for a while now.


I seem to have a problem where I can't select an LLM even though I have the CDCK hosted one configured.

Is this normal?


A lot to unpack here. Which LLM are you trying to choose, and for what?

The CDCK LLMs are only available for very specific features. To see which, head to /admin/whats-new on your instance and click “only show experimental features”; you will need to enable them to unlock the CDCK LLM on those features.

Any LLM you define outside of CDCK LLMs is available to all features.


Is there also a topic that provides a general rundown of the best cost/quality balance? Or even which LLM can be used for free for a small community and basic functionality? I can dive into the details and play around. But I’m a bit short in terms of time.

For example, I only care about spam detection and a profanity filter. I had this for free, but those plugins are deprecated or soon to be. It would be nice if I can retain this functionality without having to pay for an LLM.


We do have this topic, that might be what you are looking for.


Done! It was indeed pretty easy. But for a non-techie it may still be a bit hard to set up. For example, the model name was automatically set in the settings, but it wasn’t the correct one. Luckily I recognized the model name in a curl example for Claude on the API page, and then it worked :tada:

Estimated costs are maybe 30 euro cents per month for spam control (I don’t have a huge forum). So that’s manageable! I’ve set a limit of 5 euros in the API console, just in case.


Which one did you pick for Claude? What was the incorrect name shown, and what did you correct it to?


I’m using Claude 3.5. The default model ID is claude-3-5-haiku, but I had to change it to claude-3-5-haiku-20241022, otherwise I got an error.


Good to note. Yeah, sometimes there might be a disconnect. The auto-populated info should act as guidance; it tends to work most of the time but falls short in certain cases such as yours (given all the different models and provider configs).

I have updated the OP of this guide


This model is not listed on 3.4.2. Are those pre-configs only available on 3.5, and do I have to add them manually?

Edit: Also what option do I choose for “Tokenizer” when using Grok 3 models?

Pre-configs are simply templates, you can get the same end result by using the “Manual configuration”.

I’ve found that the Gemini tokenizer is pretty close to the Grok one, so try that.


Is there a way to use IBM WatsonX through the current configuration admin, or would that require additional development work from the Discourse team?

Does IBM WatsonX happen to expose an OpenAI-compatible API?

Great question. A quick poke around the docs didn’t tell me much, but the fact that this repository exists suggests that it is not directly compatible: GitHub - aseelert/watsonx-openai-api: Watsonx Openai compatible API

Which of these large language models (LLMs) can be used for free for spam prevention?

Edit: Never mind, I’m using Gemini Flash 2.5

I always wonder that too. This seems like the best answer to that question.

But also, there is this in the OP from the Spam config topic. I think it’s just a little hard to find in all of the information that’s there.
