Forum Researcher AI Persona guide

:bookmark: This guide explains the Forum Researcher persona in Discourse AI, how it works, and how to configure it for in-depth forum content analysis.

:person_raising_hand: Required user level: Administrator (to enable and configure), All users (to interact, if granted access)

Understanding and using the Forum Researcher persona

The Discourse AI plugin includes the Forum Researcher persona, a powerful tool designed for conducting in-depth research on the content within your forum. This persona can help you uncover insights, summarize discussions, and analyze trends across your community.

Summary

This document will cover:

  • How the Forum Researcher persona functions.
  • Steps to configure the Forum Researcher.
  • Best practices for interacting with the persona.
  • The distinction between the Forum Researcher and standard forum helper tools.
  • Guidance on selecting an appropriate Large Language Model (LLM).
  • Debugging tips for research tasks.
  • Current limitations of the persona.

How it works

The Forum Researcher persona uses a dedicated Researcher tool. This tool is engineered to:

  1. Access forum content: It can read through various sections of your forum.
  2. Apply advanced filters: A flexible filter system allows the tool to target relevant information precisely. You can specify content by:
    • Specific categories (e.g., category:support)
    • Users or groups (e.g., usernames:sam,jane, group:moderators)
    • Keywords in posts or topic titles (e.g., keywords:regression,bug, topic_keywords:"feature request")
    • Date ranges for posts or topics (e.g., after:2024-01-01 before:2024-06-30)
    • Topic status (e.g., status:open, status:closed)
    • Post type (e.g., post_type:first)
    • Filters can be combined using AND logic (space-separated) or OR logic (using OR between filter groups). For example: category:bugs status:open after:2024-05-01 OR tag:critical usernames:sally. A fuller worked example follows this list.
  3. Analyze content with Large Language Models (LLMs): After retrieving the filtered content, it uses an LLM to analyze the information, extract insights, and answer your specific questions or achieve your research goals.
  4. Follow a structured process: To ensure efficiency and accuracy, especially considering potential costs, the Forum Researcher is designed to:
    • Understand: It will work with you to clarify your research goals at the beginning.
    • Plan: Based on your goals, it designs a comprehensive research approach using the available filters.
    • Test (Dry Run): Before executing the full analysis, the persona typically performs a “dry run.” This involves calculating how many posts match your filter criteria without immediately processing them with the LLM. The persona will then inform you of this count.
    • Refine: Based on the dry run results, if the number of posts is too large (risking high costs or overly broad results) or too small (potentially missing key information), the persona can help you adjust the filters.
    • Execute: Once you confirm the scope is appropriate (after the dry run), the persona runs the final analysis, sending the content to the LLM.
    • Summarize: It presents the findings, typically using Discourse Markdown, with links back to the original forum posts and topics as supporting evidence.
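
To make this concrete, suppose your goal is to review recently reported, still-open bugs. The persona might propose a combined filter such as the following (the category name, keywords, and date here are purely illustrative):

  category:bugs status:open keywords:regression,crash after:2025-01-01

The dry run would then report how many posts match this filter. If the count is very large you might narrow the date range or add a topic_keywords filter; if it is very small you might drop a keyword. Only once the count looks reasonable would you ask the persona to execute the full analysis.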

This methodical approach means you can ask the researcher to perform tasks like:

  • “Summarize the most frequently discussed unresolved bugs in the ‘mobile-app’ category from the last quarter, and identify any proposed solutions or workarounds mentioned in the discussions.”
  • “Help me identify the main arguments for and against the ‘New User Onboarding’ proposal topic (link), and list the key proponents of each side.”
  • “Review activity by the ‘documentation-team’ group in the past year and provide a report on their key contributions to how-to articles, highlighting any tutorials that received significant positive feedback.”

Configuring the Forum Researcher

The Forum Researcher is disabled by default because its usage can incur LLM costs.

  1. Enable Persona: Activate it by navigating to Admin → AI → Personas.
  2. Control Access: It is strongly recommended to limit this persona to specific groups to manage LLM costs. You can also use AI quotas for finer control.

Once enabled, the tool has several configuration options:

  • LLM: Select a specific LLM for research. This defaults to the bot’s current LLM. This option allows you to balance quality and cost.
  • Maximum number of results: This limits the number of posts processed per query to control costs. The default is 1000.
  • Include private: This allows searching in secure categories and private messages, using the interacting user’s permissions.
  • Maximum tokens per post: This truncates long posts to save token costs. It defaults to 2000 tokens, with a minimum of 50.
  • Maximum tokens per batch: This controls the data chunk size sent to the LLM. It’s useful for LLMs with large context windows or to maintain focus. If set below 8000, it defaults to the LLM’s maximum prompt tokens minus a 2000 token buffer.
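
To get a feel for how these limits interact, here is a minimal back-of-the-envelope sketch in Python. The 1000-post and 2000-token figures are the defaults described above; the 128,000-token context window is an assumption used purely for illustration.

  # Rough worst-case estimate of how much forum content one research run can send
  max_results = 1000             # "Maximum number of results" default
  max_tokens_per_post = 2000     # "Maximum tokens per post" default
  context_window = 128_000       # assumed LLM context window (illustrative only)
  buffer = 2000                  # buffer subtracted from the prompt limit
  tokens_per_batch = context_window - buffer               # 126,000 tokens per batch
  worst_case_tokens = max_results * max_tokens_per_post    # 2,000,000 tokens
  batches = -(-worst_case_tokens // tokens_per_batch)      # ceiling division -> 16 batches
  print(f"Up to {worst_case_tokens:,} tokens in about {batches} batches")

Lowering Maximum number of results or Maximum tokens per post shrinks both figures, which is the most direct way to keep the cost of a single run predictable.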

Best practices for interaction

To get the most out of the Forum Researcher while managing costs:

  • Be specific with goals: Clearly define what you want to find out before you start. The persona works best when it has precise objectives.
  • Confirm scope after dry run: The persona will typically perform a ‘dry run’ first and inform you how many posts it found based on your request. Pay close attention to this number. If it’s too high (risking high costs or unfocused results) or too low (potentially missing crucial information), discuss refining your filters with the persona before committing to the full analysis.
  • Iterate on filters: If the initial dry run isn’t targeting the right information, work with the persona to adjust filter criteria. Add more specific keywords, narrow date ranges, or specify categories/tags.
  • Consolidate queries: The persona is designed to handle multiple related goals in a single research execution. Try to group related questions into one comprehensive research request to the persona.
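
For instance, rather than asking three separate questions about the same area of the forum, a single consolidated request (hypothetical wording) might read: "In the 'support' category over the last six months, identify the ten most common problems, note which of them already have an accepted workaround, and list any topics where a fix was promised but never followed up." One research execution can then answer all three parts.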

Relationship to standard forum helper and related tools

The Forum Researcher persona is distinct from a general Forum Helper that uses standard tools like Search and Read.

  • Standard Search and Read Tools:

    • The Search tool primarily identifies relevant topics. It does this by matching keywords against post content and other criteria (tags, categories, etc.). For each matching topic, it returns a link and a brief snippet from a relevant post, not the full post content.
    • The Read tool is used to access the full content of a specific topic (or selected posts within it) that Search has identified.
    • These tools work in tandem for targeted retrieval: Search finds topics, Read digests their content.
  • Forum Researcher’s Researcher tool:

    • Direct, deep content analysis: The Researcher tool doesn’t just identify topics; it directly processes and analyzes the full content of potentially many posts (up to its configured Maximum number of results) that match its comprehensive filter criteria.
    • Advanced filtering and synthesis: It uses a more complex filtering language to build a dataset of posts from across the forum (potentially spanning hundreds of topics), and then synthesizes information from this entire dataset to answer complex questions. This is fundamentally different from reading individual topics one by one.

In essence, while a Forum Helper uses Search to pinpoint topics (presenting snippets) and Read to delve into one, the Forum Researcher conducts broad analysis across the actual text of many posts simultaneously to uncover deeper, synthesized insights.

What LLM should I use?

LLM technology is rapidly evolving, with models continually improving in capability and cost-effectiveness. During the development of the Forum Researcher, models like Gemini 2.5 Flash, Gemini 2.5 Pro, GPT-4.1, and Claude 4 Sonnet provided excellent results for complex research plans.

The best choice depends on your specific needs:

  • High-quality, nuanced analysis: More advanced models might be preferable, though they usually come with higher costs.
  • Broad overviews or cost-sensitive tasks: Faster, more economical models can be very effective.

Here are some point-in-time examples from internal testing at Discourse for a very specific, complex query:

Look at the top 1000 open topics in the feature category - ordered by like (first post only) - all time … make me an executive report of the:

  • Top 20 features CDCK should build
  • Easiest 20 features CDCK could build
  • Obvious duplicates
  • Things that are very poorly defined

ask me no more questions, just run the research

  1. Gemini 2.0 Flash Example
  2. Gemini 2.5 Flash (with thinking) Example
  3. GPT-4.1 Example
  4. Claude 4 Sonnet Example
  5. Gemini 2.5 Pro Example

Hybrid example: Driver is Gemini 2.5 Pro and Researcher LLM is Gemini 2.0 Flash

Debugging research

In Discourse, you can enable advanced AI debugging by adding groups to the ai_bot_debugging_allowed_groups site setting. With that in place, you are able to see the actual payloads sent to the LLM.
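
For example, adding a small trusted group (say, a hypothetical ai-debuggers group) to that setting lets its members see exactly which filtered posts were included in each batch and how large the resulting prompts were, which is helpful when a research run returns unexpected results or consumes more tokens than anticipated.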

Limitations

Currently, there is no option to send images to the research LLM. This will be considered in future versions.

FAQs

  • Is the Forum Researcher available on all Discourse plans?
    The Forum Researcher is part of the Discourse AI plugin, which is available for self-hosted sites and on our Enterprise hosting plan.

  • Can the Forum Researcher access content from private categories or messages?
    Yes, if the “Include private” option is enabled in its configuration and the user interacting with the persona has the necessary permissions to access those areas.

  • How can I control the cost of using the Forum Researcher?

    • Limit access to specific, trusted groups.
    • Use the “Maximum number of results” and “Maximum tokens per post” settings to cap processing.
    • Choose cost-effective LLMs.
    • Pay close attention to the “dry run” estimates before executing full research.
    • Utilize AI quotas.
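
As a rough illustration of why these levers matter: with the default limits of 1000 posts and 2000 tokens per post, a single full research run can send up to about 2,000,000 input tokens. At a hypothetical rate of $0.50 per million input tokens (actual pricing varies widely by model and changes often), that is roughly $1 per run before output tokens are counted, and halving either setting halves that ceiling.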
