This guide covers the AI usage page which is part of the Discourse AI plugin.
Required user level: Administrator
The AI usage page is designed for admins to understand how the community is using Discourse AI features over time. This can be helpful when estimating costs; see Estimating costs of using LLMs for Discourse AI.
Features
- Time horizon: last 24 hours, last week, or a custom date range
- Selectable by Discourse AI feature
- Selectable by enabled LLMs
- Summary preview
- Total requests: All requests made to LLMs through Discourse
- Total tokens: All the tokens used when prompting an LLM
- Request tokens: Tokens used in the prompts sent to the LLM
- Response tokens: Tokens used in the LLM's responses to your prompts
- Cached tokens: Previously processed request tokens that the LLM reuses to reduce cost and improve performance
- Interactive bar graph showing token usage over time
- Token and usage count per Discourse AI feature
- Token and usage count per LLM
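To make the summary figures concrete, here is a minimal sketch of how usage rows could be aggregated into the numbers shown above. The field names and the assumption that total tokens equals request plus response tokens are illustrative, not the plugin's actual schema:

```python
# Hypothetical usage rows, one per LLM request made through Discourse.
# Field names are illustrative, not the plugin's actual schema.
rows = [
    {"request_tokens": 1200, "response_tokens": 350, "cached_tokens": 400},
    {"request_tokens": 800,  "response_tokens": 150, "cached_tokens": 0},
]

summary = {
    "total_requests": len(rows),
    "request_tokens": sum(r["request_tokens"] for r in rows),
    "response_tokens": sum(r["response_tokens"] for r in rows),
    "cached_tokens": sum(r["cached_tokens"] for r in rows),
}
# Assumption: total tokens = request tokens + response tokens,
# with cached tokens counted as a subset of request tokens.
summary["total_tokens"] = summary["request_tokens"] + summary["response_tokens"]
print(summary)
```

The same grouping, keyed by feature or by LLM instead of summed globally, would produce the per-feature and per-LLM counts.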
Enabling AI usage
Prerequisites
For data to be populated, you must configure at least one Large Language Model (LLM) from a provider and have members using Discourse AI features.
To get started, you can configure LLMs through the Discourse AI - Large Language Model (LLM) settings page. Supported providers include:
- OpenAI
- Anthropic
- Azure OpenAI
- AWS Bedrock with Anthropic access
- HuggingFace Endpoints with Llama2-like model
- Self-Hosting an OpenSource LLM
- Google Gemini
- Cohere
Configuration
1. Go to the Admin → Plugins → AI → Settings tab and make sure the plugin is enabled (`discourse ai enabled`)
2. Within the AI plugin, navigate to the Usage tab
Technical FAQ
What are tokens and how do they work?
- Tokens are the basic units that LLMs use to understand and generate text, so token usage directly affects costs. Check out Estimating costs of using LLMs for Discourse AI to learn more.
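As a rough illustration of how the token counts on the usage page translate into spend, the sketch below multiplies each count by a per-million-token price. The prices and the cached-token discount are hypothetical placeholders, not any provider's actual rates:

```python
def estimate_cost(request_tokens, response_tokens, cached_tokens=0,
                  input_price_per_m=3.00, output_price_per_m=15.00,
                  cached_price_per_m=0.30):
    """Estimate LLM spend in USD from the token counts on the AI usage page.

    Prices are per million tokens and are hypothetical placeholders;
    substitute your provider's actual rates. Assumes cached tokens are a
    subset of request tokens billed at a reduced rate.
    """
    uncached = request_tokens - cached_tokens
    return (uncached * input_price_per_m
            + response_tokens * output_price_per_m
            + cached_tokens * cached_price_per_m) / 1_000_000

# Example: 2M request tokens (500k of them cached) and 800k response tokens
print(estimate_cost(2_000_000, 800_000, cached_tokens=500_000))
```

Because response tokens are typically priced higher than request tokens, a community that generates long replies (e.g. summarization) can cost more than one with the same total token count dominated by prompts.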
Last edited by @Saif 2025-01-23T20:01:28Z