This guide explains how to enable and configure the AI search feature, which is part of the Discourse AI plugin.
Required user level: Administrator
Similar to Related topics, AI search helps you find the most relevant topics using semantic textual similarity, going beyond the exact keyword matching used by traditional search. This surfaces topics that are not exact matches but are still relevant to the original query. If you can’t find what you’re looking for, AI search is here to help!
Features
- Semantic textual similarity: going beyond just a keyword match, using semantic analysis to find textually similar topics
- AI quick search: automatically adds AI results to the search menu popup when few regular results are found (enable with `ai_embeddings_semantic_quick_search_enabled`)
- Toggled on/off for AI search in full-page search
- Optional HyDE (Hypothetical Document Embeddings): uses an LLM to expand queries for better results
- Results indicated by an icon
- Applicable to both anonymous and logged-in users
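Under the hood, semantic similarity compares embedding vectors rather than keywords. A minimal sketch of the idea, using toy three-dimensional vectors and plain cosine-similarity ranking (real embedding models produce hundreds of dimensions, and this is not Discourse's actual implementation):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means same direction, near 0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; topic titles and vector values are illustrative only.
query = [0.9, 0.1, 0.3]
topics = {
    "Setting up leaderboards": [0.8, 0.2, 0.4],
    "Backup and restore":      [0.1, 0.9, 0.2],
}

# Rank topics by similarity of their embedding to the query embedding.
ranked = sorted(topics, key=lambda t: cosine_similarity(query, topics[t]), reverse=True)
print(ranked[0])  # → Setting up leaderboards
```

A topic about leaderboards ranks first even though the query shares no keyword with it; that is the behavior the feature relies on.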
Enabling AI Search
Prerequisites
To use AI Search you will need Embeddings configured. A Large Language Model (LLM) is optionally needed if you enable HyDE (Hypothetical Document Embeddings) for improved search quality.
Embeddings
If you are on our hosting, we will provide a default option. For self-hosters, follow the guide at Discourse AI - Embeddings.
Large Language Model (LLM) (optional — for HyDE)
An LLM is only required if you enable the ai_embeddings_semantic_search_use_hyde setting, which uses an LLM to create a hypothetical document from the search query before embedding it. This can improve result quality but adds latency and cost.
To get started, you can configure an LLM through the Discourse AI - Large Language Model (LLM) settings page. Supported providers include:
- OpenAI
- Anthropic
- Azure OpenAI
- AWS Bedrock with Anthropic access
- Self-Hosting an OpenSource LLM for DiscourseAI
- Google Gemini
Configuration
- Go to `Admin` → `Plugins` → `Discourse AI` → `Features` → `Embeddings` to find all AI search settings
- Enable `ai_embeddings_enabled` for Embeddings
- Enable `ai_embeddings_semantic_search_enabled` to activate AI search on the full-page search
- Optionally enable `ai_embeddings_semantic_quick_search_enabled` to add AI results to the search menu popup
- Optionally enable `ai_embeddings_semantic_search_use_hyde` to use HyDE for improved results (requires an LLM)
Technical FAQ
Expand for an outline of the AI search logic (with HyDE enabled)
```mermaid height=255,auto
sequenceDiagram
    User->>+Discourse: Search "gamification"
    Discourse->>+LLM: Create an article about "gamification" in a forum about<br> "Discourse, an open source Internet forum system."
    LLM->>+Discourse: Gamification involves applying game design elements like<br> points, badges, levels, and leaderboards to non-game contexts...
    Discourse->>+EmbeddingsAPI: Generate Embeddings for "Gamification involves applying game design..."
    EmbeddingsAPI->>+Discourse: [0.123, -0.321...]
    Discourse->>+PostgreSQL: Give me the nearest topics for [0.123, -0.321...]
    PostgreSQL->>+Discourse: Topics: [1, 5, 10, 50]
    Discourse->>+User: Topics: [1, 5, 10, 50]
```
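The same sequence can be sketched as code. Every function below is an illustrative stub standing in for an external call (the LLM, the embeddings API, and the pgvector lookup in PostgreSQL); none of these names are Discourse's actual API, and the return values simply mirror the diagram above:

```python
def llm_complete(prompt):
    # Stub for the LLM call that expands the query into a hypothetical document.
    return "Gamification involves applying game design elements like points..."

def generate_embeddings(text):
    # Stub for the embeddings API; real vectors have many more dimensions.
    return [0.123, -0.321]

def nearest_topics(vector):
    # Stub for a nearest-neighbour query against stored topic embeddings.
    return [1, 5, 10, 50]

def hyde_search(query):
    # 1. Ask the LLM to write a hypothetical post about the query (HyDE).
    hypothetical = llm_complete(f'Create an article about "{query}" in a forum.')
    # 2. Embed the hypothetical document instead of the raw query.
    vector = generate_embeddings(hypothetical)
    # 3. Find the topics whose embeddings are closest to that vector.
    return nearest_topics(vector)

print(hyde_search("gamification"))  # → [1, 5, 10, 50]
```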
How does AI Search work?
- When HyDE is enabled (`ai_embeddings_semantic_search_use_hyde`), the search query is run through an LLM, which creates a hypothetical topic/post. Embeddings are then generated from that hypothetical post and used to search your site for similar matches. When HyDE is disabled (the default), the search query is embedded directly and used for similarity matching. In both cases, the results are merged with regular search results using Reciprocal Rank Fusion (RRF) to re-rank the top results.
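Reciprocal Rank Fusion itself is simple to sketch: each result scores the sum of 1/(k + rank) over every ranked list it appears in, where k is a smoothing constant (60 is a common choice in the literature; the constant and topic IDs here are illustrative, not necessarily what Discourse uses):

```python
def rrf_merge(ranked_lists, k=60):
    # Reciprocal Rank Fusion: score each item by sum of 1/(k + rank)
    # across every list it appears in, then sort by total score.
    scores = {}
    for results in ranked_lists:
        for rank, item in enumerate(results, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_results  = [10, 5, 42]  # topic IDs from traditional keyword search
semantic_results = [5, 7, 10]   # topic IDs from embedding similarity

print(rrf_merge([keyword_results, semantic_results]))  # → [5, 10, 7, 42]
```

Topic 5 wins because it ranks highly in both lists, while topics found by only one method still appear further down.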
How is topic/post data processed?
- When HyDE is enabled, LLM data is processed by a third-party provider; please refer to your specific provider for more details. By default, the Embeddings microservice is run alongside the other servers that host your existing forums. There is no third party involved there, and that information never leaves your internal network in our virtual private datacenter.
Where does the data go?
- When HyDE is enabled, the hypothetical topic/post created by the LLM provider is temporarily cached alongside the Embeddings for that document. Embeddings data is stored in the same database as your topics, posts, and users; it’s simply another data table there.
What does the Embeddings “semantic model” look like? How was it “trained”, and is there a way to test that it can accurately apply to the topics on our “specialized” communities?
- By default, we use pre-trained open source models, such as this one. We have deployed them to many customers and found that they perform well for both niche and general communities. If the performance isn’t good enough for your use case, we have more complex models ready to go, but in our experience the default option is a solid choice.
Last edited by @Saif 2025-02-13T19:43:12Z
Last checked by @hugh 2024-08-06T04:44:33Z