Configuring OpenRouter language models

:bookmark: This guide explains how to configure API keys for OpenRouter to enable Discourse AI features that require 3rd party LLM keys.

:person_raising_hand: Required user level: Administrator

In this guide, we’ll configure OpenRouter, a service that provides access to multiple large language models through a unified API.

Note: You will need an OpenRouter plan and a configured API key.

Obtain API keys

  1. Visit OpenRouter
  2. Sign up for an account
  3. Navigate to your dashboard to find your API key

What models does OpenRouter support in Discourse AI?

Discourse AI currently supports all the models on OpenRouter. The following models are available as one-click presets:

  • DeepSeek V3.2 (up to 163k tokens)
  • Moonshot Kimi K2.5 (up to 262k tokens)
  • xAI Grok 4 Fast (up to 131k tokens)
  • MiniMax M2.5 (up to 196k tokens)
  • Z-AI GLM-5 (up to 204k tokens)
  • Arcee Trinity Large - Free (up to 128k tokens)

Any other model available on OpenRouter can be configured manually.

Using API keys for Discourse AI

  1. Go to Admin → Plugins → AI → LLMs tab
  2. You will see OpenRouter models listed as preset templates. Click the Set up button next to the model you want to configure. This will pre-fill the provider, model ID, endpoint, and tokenizer for you — just enter your API key.

Manual configuration for other models

To use an OpenRouter model that is not listed as a preset:

  1. Click the Set up button under “Manual configuration”
  2. Configure the following settings:
  • Provider: Select “OpenRouter”
  • Model ID: Enter the model ID (e.g., “deepseek/deepseek-v3.2”)
  • API Key: Your OpenRouter API key
  • Endpoint URL: https://openrouter.ai/api/v1/chat/completions
  • Tokenizer: Use the OpenAiTokenizer
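Under the hood, OpenRouter exposes an OpenAI-compatible chat completions API, so the manual settings above map directly onto an ordinary HTTP request. As an illustrative sketch (not Discourse’s actual internal code; the model ID and key below are placeholders), the request built from these settings looks roughly like:

```python
import json

# Values you would enter in the manual configuration form (placeholders).
ENDPOINT = "https://openrouter.ai/api/v1/chat/completions"
MODEL_ID = "deepseek/deepseek-v3.2"   # any model ID listed on OpenRouter
API_KEY = "sk-or-..."                 # your OpenRouter API key

def build_request(user_message: str) -> tuple[dict, bytes]:
    """Build headers and JSON body for an OpenAI-style chat completion."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_message}],
    }
    return headers, json.dumps(body).encode()

headers, payload = build_request("Hello!")
print(json.loads(payload)["model"])   # deepseek/deepseek-v3.2
```

Any model ID shown on OpenRouter’s model list can be substituted for the placeholder; the endpoint URL and tokenizer setting stay the same.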

Advanced Configuration Options

OpenRouter in Discourse AI supports additional configuration options:

  1. Provider Order: You can specify the order of providers to try (comma-separated list)
    Example: “Google, Amazon Bedrock”

  2. Provider Quantizations: Specify the quantization preferences (comma-separated list)
    Example: “fp16,fp8”

  3. Disable Streaming: Enable this option if you want to disable streaming responses from the model.
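These options correspond to OpenRouter’s provider routing preferences, which travel as an extra `provider` object in the request body. The sketch below shows one plausible way the comma-separated admin settings translate into that object (the helper function name is my own; the `order` and `quantizations` field names follow OpenRouter’s routing API):

```python
def routing_options(provider_order: str = "", quantizations: str = "",
                    disable_streaming: bool = False) -> dict:
    """Translate the Discourse admin settings into request-body fields."""
    extra: dict = {}
    provider: dict = {}
    if provider_order:
        # "Google, Amazon Bedrock" -> ["Google", "Amazon Bedrock"]
        provider["order"] = [p.strip() for p in provider_order.split(",")]
    if quantizations:
        # "fp16,fp8" -> ["fp16", "fp8"]
        provider["quantizations"] = [q.strip() for q in quantizations.split(",")]
    if provider:
        extra["provider"] = provider
    if disable_streaming:
        extra["stream"] = False
    return extra

print(routing_options("Google, Amazon Bedrock", "fp16,fp8", True))
```

With all three settings left blank or disabled, no extra fields are sent and OpenRouter falls back to its default routing.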

Should I disable native tool support or not?

Some models on OpenRouter do not support native tool calling. For those models, disable native tools so Discourse AI can fall back to its XML-based tool implementation. Performance also varies between models, so test whether XML-based or native tools give better results for the model you are using.

  • Enable “Disable native tools” to use XML-based tool implementation
  • Leave it disabled to use native tool implementation