Discourse AI - AI usage

:bookmark: This guide covers the AI usage page which is part of the Discourse AI plugin.

:person_raising_hand: Required user level: Administrator

The AI usage page is designed for admins to understand how the community is using Discourse AI features over time. This can be helpful for estimating the costs of using LLMs for Discourse AI.

Features

  • Time horizon: last 24 hours, last week, last month, and a custom date range
  • Selectable by Discourse AI feature
  • Selectable by enabled LLMs
  • Summary preview
    • Total requests: All requests made to LLMs through Discourse
    • Total tokens: All the tokens used when prompting an LLM
    • Request tokens: Tokens consumed by the prompts sent to the LLM
    • Response tokens: Tokens used when the LLM responds to your prompt
    • Cache read tokens: Previously processed request tokens that the LLM reuses from cache to optimize performance and cost
    • Cache write tokens: Request tokens that were written to cache for potential future reuse
    • Estimated cost: Cumulative cost of all tokens used by the LLMs based on specified cost metrics added to LLM configurations
  • Interactive bar graph showing token usage
  • Token count, usage count, and estimated cost per Discourse AI feature
  • Token count, usage count, and estimated cost per LLM
  • Token count, usage count, and estimated cost per user
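The estimated cost shown in the summary can be sketched as simple arithmetic over the token counts and the per-token prices you enter in each LLM configuration. The function below is an illustrative assumption about how such an estimate is derived, not the plugin's actual implementation; the price fields and their units (here, per one million tokens) are hypothetical.

```python
# Illustrative sketch (assumed formula): estimated cost is each token
# category multiplied by its configured price per one million tokens.

def estimated_cost(request_tokens, response_tokens, cache_read_tokens,
                   input_price, output_price, cache_read_price):
    """All prices are assumed to be USD per one million tokens."""
    return (request_tokens * input_price
            + response_tokens * output_price
            + cache_read_tokens * cache_read_price) / 1_000_000

# Example: 2M request tokens at $3/M, 0.5M response tokens at $15/M,
# and 1M cache-read tokens at $0.30/M.
print(round(estimated_cost(2_000_000, 500_000, 1_000_000, 3.0, 15.0, 0.30), 2))
```

Cache-read tokens are typically billed at a fraction of the normal input price, which is why the page breaks them out separately.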

Enabling AI usage

Prerequisites

:information_source: For data to be populated, you must configure at least one Large Language Model (LLM) from a provider and have members using Discourse AI features.

To get started, you can configure LLMs through the Discourse AI - Large Language Model (LLM) settings page.

Configuration

  1. Go to Admin settings → Plugins → AI → Settings tab and make sure the plugin is enabled (discourse ai enabled)
  2. Within the AI plugin, navigate to the Usage tab

Technical FAQ

What are tokens and how do they work?
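Tokens are the small chunks of text (roughly word fragments) that LLMs read and generate; both your prompt and the model's reply are measured and billed in tokens. Exact counts depend on the model's tokenizer, but a common rule of thumb for English text is roughly four characters per token. The helper below is a hedged sketch of that heuristic only, not how Discourse AI counts tokens.

```python
# Rough token estimate using the common "~4 characters per token" rule of
# thumb for English text. Real tokenizers vary by model, so treat this as
# a ballpark figure only.

def approx_token_count(text: str) -> int:
    return max(1, len(text) // 4)

prompt = "Summarize the latest replies in this topic."
print(approx_token_count(prompt))  # roughly 10 tokens for this 43-character prompt
```

The usage page reports the authoritative counts returned by each provider's API, so there is no need to estimate them yourself; a heuristic like this is only useful for rough capacity planning.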
