Configuring OpenRouter language models

:bookmark: This guide explains how to configure API keys for OpenRouter to enable Discourse AI features that require 3rd party LLM keys.

:person_raising_hand: Required user level: Administrator

In this guide, we’ll configure OpenRouter - a service that provides access to multiple large language models through a unified API.

Note: You will need a plan and a configured API key from OpenRouter

Obtain API keys

  1. Visit OpenRouter (https://openrouter.ai)
  2. Sign up for an account
  3. Navigate to your dashboard to find your API key
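
To confirm the key works before pasting it into Discourse, you can call the same chat completions endpoint Discourse AI will use. This is a minimal sketch, assuming Python with the requests library and one of the model IDs mentioned later in this guide:

```python
import requests

# Send a one-off completion to the endpoint Discourse AI uses, to verify
# the key is valid before entering it in the admin panel.
API_KEY = "sk-or-..."  # your OpenRouter API key

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "meta-llama/llama-3.3-70b-instruct",
        "messages": [{"role": "user", "content": "Say hello."}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

A 401 response here means the key is wrong; sort that out before touching the Discourse settings.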

What models does OpenRouter support in Discourse AI?

Discourse AI currently supports all the models on OpenRouter, including:

  • Llama 3.3 70B (up to 128k tokens)
  • Gemini Flash 1.5 Exp (up to 1M tokens)

And many other models available through the OpenRouter platform.
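
Because the model ID must be entered exactly in Discourse, it can help to list what is available first. A short sketch, assuming OpenRouter's public model listing endpoint and its documented response fields:

```python
import requests

# Print every model ID OpenRouter currently exposes, with its context
# window, so the ID can be copied verbatim into the Discourse settings.
models = requests.get("https://openrouter.ai/api/v1/models", timeout=30).json()

for model in models["data"]:
    print(f'{model["id"]}  (context: {model.get("context_length")} tokens)')
```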

Using API keys for Discourse AI

  1. Go to Admin → Plugins → AI → LLMs tab
  2. Click the Set up button on “Manual configuration”
  3. Configure the following settings:
    • Provider: Select “OpenRouter”
    • Model ID: Enter the model ID (e.g., “meta-llama/llama-3.3-70b-instruct”)
    • API Key: Your OpenRouter API key
    • Endpoint URL: https://openrouter.ai/api/v1/chat/completions
    • Tokenizer: Use the OpenAiTokenizer
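
On the Tokenizer choice: models behind OpenRouter use many different tokenizers, so the OpenAiTokenizer serves as an approximation for counting prompt tokens. The sketch below shows the kind of estimate involved, using the tiktoken library; the cl100k_base encoding is an assumption for illustration, not necessarily the exact encoding Discourse uses:

```python
import tiktoken

# Estimate how many tokens a prompt will consume, the way an
# OpenAI-style tokenizer counts them.
encoding = tiktoken.get_encoding("cl100k_base")  # assumed encoding
prompt = "Summarize the latest replies in this topic."
print(len(encoding.encode(prompt)))  # approximate token count
```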

Advanced Configuration Options

OpenRouter in Discourse AI supports two additional configuration options, illustrated in the sketch after this list:

  1. Provider Order: You can specify the order of providers to try (comma-separated list)
    Example: “Google, Amazon Bedrock”

  2. Provider Quantizations: Specify the quantization preferences (comma-separated list)
    Example: “fp16,fp8”
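
As a sketch of what these two settings control, here is roughly how the comma-separated lists translate into OpenRouter's provider routing options on the underlying request; the provider.order and provider.quantizations field names follow OpenRouter's routing documentation:

```python
import requests

API_KEY = "sk-or-..."  # your OpenRouter API key

# The two Discourse settings become arrays under the "provider" key.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "meta-llama/llama-3.3-70b-instruct",
        "messages": [{"role": "user", "content": "Hello"}],
        "provider": {
            "order": ["Google", "Amazon Bedrock"],  # Provider Order
            "quantizations": ["fp16", "fp8"],       # Provider Quantizations
        },
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```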

Should I disable native tool support?

Some models on OpenRouter do not support native tools. In those cases, disable native tools so Discourse AI can fall back to its XML-based tool implementation. Tool-calling performance also varies by model, so test whether XML or native tools give better results for the model you choose.

  • Enable “Disable native tools” to use the XML-based tool implementation
  • Leave it disabled to use the native tool implementation
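
If you are unsure whether a model supports native tools, you can probe it directly with an OpenAI-style tools payload before deciding. A rough sketch; the get_time tool here is hypothetical, defined purely for the probe:

```python
import requests

API_KEY = "sk-or-..."  # your OpenRouter API key

# Ask a question that should trigger the (hypothetical) tool.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "meta-llama/llama-3.3-70b-instruct",
        "messages": [{"role": "user", "content": "What time is it in UTC?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_time",  # hypothetical tool for this probe
                "description": "Get the current time in a given timezone",
                "parameters": {
                    "type": "object",
                    "properties": {"timezone": {"type": "string"}},
                    "required": ["timezone"],
                },
            },
        }],
    },
    timeout=30,
)

message = response.json()["choices"][0]["message"]
# Models with working native tool support usually return tool_calls;
# an error or a plain-text answer suggests enabling "Disable native tools".
print(message.get("tool_calls") or message.get("content"))
```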