Add the API key (depending on the model, you may have more fields to fill in manually) and save
(Optional) Test your connection to make sure it's working
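If you prefer to sanity-check a key outside of Discourse first, a small script can help. This is a minimal sketch assuming an OpenAI-compatible chat completions endpoint; the URL and model name are examples, so swap in your provider's values:

```python
# Minimal sketch: verify an API key works before saving it in
# Discourse. Assumes an OpenAI-compatible chat completions API;
# the URL and model name below are examples, not requirements.
import requests

API_KEY = "sk-..."  # your provider API key
URL = "https://api.openai.com/v1/chat/completions"

response = requests.post(
    URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 5,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

If the script prints a reply, the key and endpoint are good, and any remaining trouble is in the Discourse configuration itself.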
Supported LLMs
You can always add a custom option if you don't see your model listed. Supported models are continually added.
Grok-2
Deepseek-R1
Nova Pro
Nova Lite
Nova Micro
GPT-4o
GPT-4o mini
OpenAI o1 Preview
OpenAI o1 mini Preview
Claude Sonnet 3.7
Claude Sonnet 3.5
Claude Haiku 3.5
Gemini Pro 1.5
Gemini Flash 1.5
Gemini Flash 2.0
Llama 3.1
Llama 3.3
Mistral Large
Pixtral Large
Qwen 2.5 Coder
Additionally, hosted customers can use the following pre-configured LLMs in the settings page. These are open-weights LLMs hosted by Discourse, ready for use to power AI features.
CDCK Hosted Large LLM: Llama 3.3
CDCK Hosted Small LLM: Qwen 2.5
CDCK Hosted Vision LLM: Qwen 2-VL
Configuration fields
You will only see the fields relevant to your selected LLM provider. Please double-check any pre-populated fields, such as Model name, against the provider's documentation; an example configuration follows the list below.
Name to display
Model name
Service hosting the model
URL of the service hosting the model
API Key of the service hosting the model
AWS Bedrock Access key ID
AWS Bedrock Region
Optional OpenAI Organization ID
Tokenizer
Number of tokens for the prompt
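For illustration, a configuration for Claude Sonnet 3.5 served by Anthropic might look roughly like this. The model identifier and token figure are examples, so confirm the exact identifier against Anthropic's documentation:

Name to display: Claude Sonnet 3.5
Model name: claude-3-5-sonnet-20241022
Service hosting the model: Anthropic
API Key of the service hosting the model: your Anthropic API key
Tokenizer: the Anthropic/Claude tokenizer
Number of tokens for the prompt: 100000 (half of the model's 200,000-token context window, per the rule of thumb below)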
Technical FAQ
What is a tokenizer?
The tokenizer translates strings into tokens, which are what a model uses to understand the input.
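To make that concrete, here is a minimal Python sketch using OpenAI's tiktoken library; other providers ship their own tokenizers, but the idea is the same:

```python
# A tokenizer turns a string into the integer tokens a model
# actually consumes. tiktoken is OpenAI's tokenizer library.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Discourse AI makes forums smarter.")
print(tokens)       # a list of integer token ids
print(len(tokens))  # how many tokens the string costs
```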
What number should I use for Number of tokens for the prompt?
A good rule of thumb is 50% of the model's context window, which is the sum of the tokens you send and the tokens the model generates. If the prompt gets too big, the request will fail; that number is used to trim the prompt and prevent that from happening.
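To make the arithmetic concrete: GPT-4o, for instance, has a 128,000-token context window, so a reasonable value would be around 64,000. The sketch below illustrates the trimming principle (it is not Discourse's actual implementation), assuming a tiktoken-style tokenizer:

```python
# Sketch of the principle, not Discourse's actual trimming code:
# cap the prompt at 50% of the context window so the request
# never blows past the model's limit.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

context_window = 128_000       # e.g. GPT-4o
budget = context_window // 2   # the 50% rule of thumb -> 64000

def trim(prompt: str) -> str:
    tokens = enc.encode(prompt)
    return enc.decode(tokens[:budget])
```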
Caveats
Sometimes the model you want to use is not listed. While you can add such models manually, we will add support for popular models as they come out.
A lot to unpack here. Which LLM are you trying to choose, and for which feature?
The CDCK LLMs are only available for very specific features. To see which, head to /admin/whats-new on your instance and click "only show experimental features"; you will need to enable those features to unlock the CDCK LLMs for them.
Any LLM you define outside of CDCK LLMs is available to all features.
Is there also a topic that provides a general rundown of the best cost/quality balance? Or even which LLM can be used for free for a small community and basic functionality? I can dive into the details and play around, but I'm a bit short in terms of time.
For example, I only care about spam detection and a profanity filter. I had this for free, but those plugins are deprecated or soon to be. It would be nice if I can retain this functionality without having to pay for an LLM.
Done! It was indeed pretty easy, though for a non-techie it may still be a bit hard to set up. For example, the model name was automatically set in the settings, but it wasn't the correct one. Luckily I recognized the right model name in a curl example for Claude on the API page, and then it worked.
Estimated costs are maybe 30 euro cents per month for spam control (I don't have a huge forum). So that's manageable! I've set a limit of 5 euros in the API console, just in case.
Good to note. Yes, sometimes there can be a disconnect: the auto-populated info is meant as guidance, and it works most of the time, but it falls short in certain cases such as yours, given all the different models and provider configs.
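For anyone who hits the same model-name mismatch: most providers expose a model-listing endpoint you can query to find the exact identifier to paste into the Model name field. Here is a minimal sketch against OpenAI's /v1/models endpoint (Anthropic and other providers offer similar listings):

```python
# Sketch: list the model identifiers your provider actually
# accepts, to double-check the auto-populated "Model name" field.
# Shown against OpenAI's /v1/models; other providers are similar.
import requests

API_KEY = "sk-..."  # your provider API key
resp = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])
```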