Manage AI credentials

:bookmark: This guide covers creating, managing, and using AI credentials (shared secrets) across LLMs, embedding models, and custom AI tools in the Discourse AI plugin.

:person_raising_hand: Required user level: Administrator

Summary

AI credentials provide a centralized, secure way to manage authentication secrets — such as API keys — across your Discourse AI configuration. Instead of pasting raw API keys into each LLM model, embedding definition, or custom tool individually, you create a credential once and reference it wherever it’s needed.

When you rotate a key (e.g., your OpenAI API key), you update it in one place and every LLM, embedding, or tool that uses the credential picks up the change automatically.

This documentation covers:

  • Creating and managing credentials
  • Linking credentials to LLMs and embedding models
  • Using credentials in custom AI tools
  • Deletion protection and credential lifecycle
  • The API for managing credentials programmatically

What is a credential?

A credential is a named, reusable secret stored centrally in Discourse AI. It has two main fields:

  • Name — A unique, human-readable label (e.g., “OpenAI API Key”). Maximum 100 characters.
  • Value — The actual secret (API key, token, etc.). Maximum 10,000 characters.

A credential can be referenced by three types of entities:

  • LLM models — as the primary API key
  • Embedding definitions — as the primary API key
  • Custom AI tools — as named secret bindings accessed from JavaScript

Additionally, certain LLM provider parameters of the “secret” type (e.g., access_key_id for AWS Bedrock) also reference credentials.

Creating and managing credentials

Accessing the credentials page

Navigate to Admin → Plugins → Discourse AI → Credentials, or visit /admin/plugins/discourse-ai/ai-secrets directly.

[screenshot placeholder: credentials list page showing name, used-by, and edit button columns]

Creating a new credential

  1. On the credentials page, click “New credential”.
  2. Enter a Name for the credential (e.g., “OpenAI API Key”).
  3. Enter the Value — the actual API key or token. This field is displayed as a password input.
  4. Click “Save”.

[screenshot placeholder: credential editor form with name and value fields]

:information_source: You can also create credentials inline while configuring an LLM, embedding, or tool. A modal dialog lets you add a new credential without leaving the current page, and the credential immediately appears in the selector dropdown.

Editing a credential

  1. On the credentials list page, click “Edit” next to the credential.
  2. Update the Name or Value as needed.
  3. Click “Save”.

When viewing an existing credential, the secret value is masked (********) in the list view. The actual value is only shown on the individual credential’s edit page.

Deleting a credential

A credential cannot be deleted if it’s currently referenced by any LLM, embedding, or tool. If you attempt to delete a credential that is in use, the interface shows a message listing the entities that reference it, with links to their edit pages.

To delete a credential:

  1. First, remove or reassign all references to the credential from any LLMs, embeddings, or tools.
  2. Return to the credential’s edit page.
  3. Click “Delete” and confirm the action.
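Deletion protection amounts to a reference check before the delete is allowed. The sketch below illustrates the idea with assumed data shapes (a `usages` list like the “Used by” column); it is not the plugin’s actual code:

```javascript
// Sketch: refuse to delete a credential that is still referenced.
// `usages` is an assumed list of { type, name } records describing the
// LLMs, embeddings, or tools that reference the credential.
function canDelete(usages) {
  return usages.length === 0;
}

function deletionError(usages) {
  if (canDelete(usages)) return null; // no references — deletion may proceed
  const list = usages.map((u) => `${u.type}: ${u.name}`).join(", ");
  return `This credential is currently in use and cannot be deleted (${list})`;
}
```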

Linking credentials to LLMs and embeddings

LLM models

When configuring an LLM model on the LLM settings page, you can select an existing credential from a dropdown instead of pasting an API key directly. At runtime, the model resolves the secret from the linked credential.

For provider-specific secrets — such as AWS Bedrock’s access_key_id — the credential’s ID is stored inside the provider parameters and resolved transparently when the model makes API requests.

Embedding definitions

Embedding models work the same way. When configuring an embedding definition, select a credential from the dropdown. The embedding model validates that either a credential or an inline API key is present, and uses the credential’s value at runtime.
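That validation and resolution logic can be sketched as follows. The field names (`credentialId`, `apiKey`) and the precedence of the credential over an inline key are assumptions for illustration, not the plugin’s actual implementation:

```javascript
// Sketch: an embedding definition is valid only if it has either a linked
// credential or an inline API key. Hypothetical field names.
function hasUsableKey(definition) {
  return Boolean(definition.credentialId) || Boolean(definition.apiKey);
}

// Assumed: the linked credential takes precedence when both are present.
function resolveKey(definition, lookupCredential) {
  if (definition.credentialId) {
    return lookupCredential(definition.credentialId);
  }
  return definition.apiKey;
}
```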

Using credentials in custom AI tools

Custom AI tools use a contract and binding pattern for secrets, which keeps the tool definition portable (exportable/importable) while secrets remain site-local.

Step 1: Declare secret contracts

When creating or editing a tool, you declare which secrets the tool requires by adding entries to its credential contracts. Each entry has an alias — a simple identifier using letters, numbers, and underscores.

On the tool editor page, click “Add credential” to add a new contract entry and give it an alias name, for example external_api_key.

[screenshot placeholder: tool editor showing credential alias fields and credential selectors]

Alias names may contain only letters, numbers, and underscores (i.e., match ^[a-zA-Z0-9_]+$) and must be unique within the tool.
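The alias rule can be checked with a regular expression. The helper below is a hypothetical sketch of that check, not the plugin’s actual validator:

```javascript
// Sketch: validate a credential alias — letters, numbers, and underscores
// only, and unique within the tool. Hypothetical helper for illustration.
function isValidAlias(alias, existingAliases = []) {
  const pattern = /^[a-zA-Z0-9_]+$/;
  return pattern.test(alias) && !existingAliases.includes(alias);
}

isValidAlias("external_api_key");     // valid
isValidAlias("api-key");              // invalid: hyphen not allowed
isValidAlias("api_key", ["api_key"]); // invalid: duplicate alias
```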

Step 2: Bind credentials to aliases

Next to each declared alias on the tool configuration page, select an existing credential from the dropdown. This creates a binding between the alias and the credential.

The binding is validated to ensure:

  • The selected credential exists
  • The alias is declared in the tool’s contracts

Step 3: Access secrets at runtime in JavaScript

Inside a tool’s JavaScript, access secrets using the secrets.get() API:

function invoke(params) {
  const apiKey = secrets.get("external_api_key");

  const result = http.get("https://api.example.com/data", {
    headers: { "Authorization": "Bearer " + apiKey }
  });

  return JSON.parse(result.body);
}

Replace external_api_key with the alias name you declared in the tool’s credential contracts.

:warning: All declared aliases must have credential bindings before the tool can run. If any bindings are missing, execution is blocked with an error message listing the unbound aliases.
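The pre-run check described above can be sketched as follows; the function names and data shapes are assumptions for illustration, not the plugin’s internal code:

```javascript
// Sketch: block tool execution when any declared alias lacks a binding.
// `contracts` is the list of declared aliases; `bindings` maps alias → credential ID.
function findUnboundAliases(contracts, bindings) {
  return contracts.filter((alias) => !(alias in bindings));
}

function assertRunnable(contracts, bindings) {
  const unbound = findUnboundAliases(contracts, bindings);
  if (unbound.length > 0) {
    throw new Error("Missing required credential bindings: " + unbound.join(", "));
  }
}
```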

Example: a tool with multiple credentials

Suppose you’re building a tool that calls two different APIs. Declare two credential contracts:

  • weather_api_key — Key for the weather data API
  • geocode_api_key — Key for the geocoding API

Then bind each alias to the appropriate credential on the tool configuration page.

In the script:

function invoke(params) {
  const weatherKey = secrets.get("weather_api_key");
  const geocodeKey = secrets.get("geocode_api_key");

  const location = http.get(
    "https://geocode.example.com/search?q=" + encodeURIComponent(params.city),
    { headers: { "X-Api-Key": geocodeKey } }
  );
  const coords = JSON.parse(location.body);

  const forecast = http.get(
    "https://weather.example.com/forecast?lat=" + coords.lat + "&lon=" + coords.lon,
    { headers: { "Authorization": "Bearer " + weatherKey } }
  );

  return JSON.parse(forecast.body);
}

Tracking credential usage

Each credential tracks where it’s referenced. On the credentials list page, the “Used by” column shows links to every LLM, embedding, or tool currently using that credential.

This visibility helps you:

  • Understand the impact before rotating or updating a secret
  • Identify unused credentials that can be safely removed
  • Quickly navigate to the entities that depend on a credential

API reference

All endpoints require admin authentication and are under the /admin/plugins/discourse-ai/ai-secrets path.

  • GET /ai-secrets — List all credentials (values masked)
  • GET /ai-secrets/:id — Show a single credential (value unmasked)
  • POST /ai-secrets — Create a new credential
  • PUT /ai-secrets/:id — Update a credential
  • DELETE /ai-secrets/:id — Delete a credential (returns 409 if in use)

Request body for create and update:

{
  "ai_secret": {
    "name": "OpenAI API Key",
    "secret": "sk-..."
  }
}
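Before calling the create or update endpoints, a client could validate the payload against the documented field limits (name at most 100 characters, secret at most 10,000). The helper below is a hypothetical client-side sketch, not part of the plugin:

```javascript
// Sketch: validate an ai_secret payload against the documented limits
// before POST/PUT. Hypothetical helper; returns a list of error strings.
function validateAiSecretPayload(payload) {
  const secret = payload && payload.ai_secret;
  if (!secret || typeof secret !== "object") {
    return ["payload must contain an ai_secret object"];
  }
  const errors = [];
  if (!secret.name || secret.name.length > 100) {
    errors.push("name is required and must be at most 100 characters");
  }
  if (!secret.secret || secret.secret.length > 10000) {
    errors.push("secret is required and must be at most 10,000 characters");
  }
  return errors;
}
```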

All create, update, and delete operations are logged to the staff action log. Secret values are treated as sensitive and are never written to logs.

Automatic migration from inline API keys

Existing installations that previously used inline API keys are automatically migrated. The migration:

  1. Reads all non-seeded LLM models and embedding definitions that have an inline API key.
  2. Deduplicates by API key and provider — if two models share the same key and provider, they receive a single credential.
  3. Creates credential records with auto-generated names like “OpenAI API Key”, “AWS Bedrock API Key”, etc.
  4. Updates the LLM model and embedding definition records to reference the new credentials.
  5. Handles AWS Bedrock access_key_id values in provider parameters — extracting the raw key, creating a credential, and replacing the inline value with the credential’s ID.

This migration runs automatically on upgrade and is irreversible. No manual action is required.
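The deduplication step can be illustrated with a sketch. The data shapes and naming scheme below are assumptions for illustration; the real migration is a database migration inside the plugin:

```javascript
// Sketch: group models by (provider, api_key) so models sharing a key end
// up referencing a single credential record. Hypothetical shapes.
function dedupeCredentials(models) {
  const credentials = new Map(); // "provider\u0000key" → credential
  for (const model of models) {
    const lookup = model.provider + "\u0000" + model.apiKey;
    if (!credentials.has(lookup)) {
      credentials.set(lookup, {
        name: `${model.provider} API Key`, // auto-generated name
        secret: model.apiKey,
      });
    }
    model.credential = credentials.get(lookup); // replace inline key with reference
  }
  return [...credentials.values()];
}
```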

Common issues and solutions

“This credential is currently in use and cannot be deleted”

This means one or more LLMs, embeddings, or tools reference the credential. Check the “Used by” column on the credentials list page to identify and reassign or remove those references before deleting.

“Missing required credential bindings” when running a tool

All credential aliases declared in a tool’s contracts must have bindings. Open the tool’s edit page, verify each alias has a credential selected in the dropdown, and save.

Credential value shows as ********

This is expected behavior. Secret values are masked in list views for security. To view or edit the actual value, click “Edit” on the specific credential.

Rotated a key but AI features still fail

After updating a credential’s value, verify that the LLM test (on the LLM settings page) passes. If the new key has different permissions or belongs to a different account, check the provider’s configuration requirements.

FAQs

Can I still use inline API keys instead of credentials?
Legacy inline API keys continue to work for existing configurations. However, credentials are the recommended approach because they simplify key rotation and reduce duplication.

Are credential values encrypted at rest?
Credential values are stored in the database. They follow the same security model as other sensitive Discourse data. Ensure your database is properly secured and that backups are handled appropriately.

What happens when I import a tool that uses credentials?
Tool imports include the credential contract aliases but not the actual secret values. After importing a tool, you’ll need to create or select credentials for each declared alias on the tool’s configuration page.

Can I share a single credential across multiple LLMs?
Yes. Multiple LLMs and embeddings can reference the same credential. This is particularly useful when you use the same provider API key across several model configurations.
