Discourse AI - AI search

:bookmark: This guide explains how to enable and configure the AI search feature, which is part of the Discourse AI plugin.

:person_raising_hand: Required user level: Administrator

Similar to Related topics, AI search uses semantic textual similarity to find relevant topics that go beyond the exact keyword matching used by traditional search. This surfaces topics that are not exact matches but are still relevant to the initial query. If you can’t find what you’re looking for, AI search is here to help!

An animated GIF showing a search for the term "AI search" and the results being displayed.

Features

  • Semantic textual similarity: going beyond just a keyword match and using semantic analysis to find textual similarity
  • AI quick search
  • Can be toggled on/off in full-page search
  • Results indicated by :sparkles: icon
  • Applicable to both anonymous and logged-in users

Enabling AI Search

Prerequisites

To use AI Search you will need Embeddings and a Large Language Model (LLM).

Embeddings

If you are on our hosting, we provide a default option. Self-hosters should follow the guide at Discourse AI - Embeddings.

Large Language Model (LLM)

Discourse hosted customers and self-hosters must configure at least one Large Language Model (LLM) from a provider.

To get started you can configure them through the Discourse AI - Large Language Model (LLM) settings page.

Configuration

  1. Go to Admin settings → Plugins → search or find discourse-ai and make sure it’s enabled
  2. Enable ai_embeddings_enabled for Embeddings
  3. Enable ai_embeddings_semantic_search_enabled to activate AI search

Technical FAQ

Expand for an outline of the AI search logic
```mermaid height=255,auto
sequenceDiagram
    User->>+Discourse: Search "gamification"
    Discourse->>+LLM: Create an article about "gamification" in a forum about<br>  "Discourse, an open source Internet forum system."
    LLM->>+Discourse: Gamification involves applying game design elements like<br> points, badges, levels, and leaderboards to non-game contexts...
    Discourse->>+EmbeddingsAPI: Generate Embeddings for "Gamification involves applying game design..."
    EmbeddingsAPI->>+Discourse: [0.123, -0.321...]
    Discourse->>+PostgreSQL: Give me the nearest topics for [0.123, -0.321...]
    PostgreSQL->>+Discourse: Topics: [1, 5, 10, 50]
    Discourse->>+User: Topics: [1, 5, 10, 50]
```
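The final steps of the diagram, where PostgreSQL returns the nearest topics for an embedding vector, boil down to ranking stored topic embeddings by similarity to the query embedding. In production this lookup happens inside PostgreSQL (typically via a vector extension such as pgvector); the following is a minimal pure-Python sketch of the same idea, with toy two-dimensional vectors standing in for real embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_topics(query_embedding, topic_embeddings, limit=4):
    """Return topic ids sorted by similarity to the query embedding."""
    ranked = sorted(
        topic_embeddings.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [topic_id for topic_id, _ in ranked[:limit]]

# Toy 2-D vectors standing in for real, high-dimensional embeddings.
topics = {
    1: [0.12, -0.31],
    5: [0.10, -0.25],
    10: [-0.50, 0.80],   # points the opposite way, so it ranks last
    50: [0.11, -0.33],
}
print(nearest_topics([0.123, -0.321], topics))
```

Real embeddings have hundreds of dimensions, but the ranking principle is the same: the smaller the angle between two vectors, the more semantically similar the texts they represent.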

How does AI Search work?

  • The initial search query is run through an LLM, which writes a hypothetical topic/post. Embeddings are then generated for that post, and your site is searched for similar matches to it. Finally, Reciprocal Rank Fusion (RRF) is used to re-rank the top results in line with regular search.
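To illustrate the final re-ranking step, here is a minimal sketch of Reciprocal Rank Fusion (RRF) in Python. The result lists and the constant k = 60 (a common default in the RRF literature) are illustrative assumptions, not Discourse's actual values:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists into one, RRF-style.

    `rankings` is a list of ranked lists of topic ids (best first).
    Each topic scores sum(1 / (k + rank)) across the lists it appears in,
    so topics ranked highly by multiple searches float to the top.
    """
    scores = {}
    for ranked in rankings:
        for rank, topic_id in enumerate(ranked, start=1):
            scores[topic_id] = scores.get(topic_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_results = [10, 1, 7]   # hypothetical regular (keyword) search order
semantic_results = [1, 5, 10]  # hypothetical AI (embeddings) search order
print(reciprocal_rank_fusion([keyword_results, semantic_results]))
```

Topic 1 wins here because it appears near the top of both lists, while topics found by only one of the two searches still make it into the fused ranking.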

Why is there an AI option in quick search?

  • The AI quick search option speeds up AI search by skipping the hypothetical-post step. Sometimes this option is faster and provides more relevant results; other times it falls short.

How is topic/post data processed?

  • LLM data is processed by a third-party provider; please refer to your specific provider for more details. By default, the Embeddings microservice is run alongside the other servers that host your existing forum. There is no third party involved here, and that specific information never leaves the internal network in our virtual private datacenter.

Where does the data go?

  • A hypothetical topic/post created by the LLM provider is temporarily cached alongside the Embeddings for that document. Embeddings data is stored in the same database as your topics, posts, and users; it’s simply another data table there.

What does the Embeddings “semantic model” look like? How was it “trained”, and is there a way to test that it can accurately apply to the topics on our “specialized” communities?

  • By default we use pre-trained open source models, such as this one. We have deployed it to many customers and found that it performs well for both niche and general communities. If the performance isn’t good enough for your use case, we have more complex models ready to go, but in our experience the default option is a solid choice.

Last edited by @Saif 2024-11-04T17:58:20Z

Last checked by @hugh 2024-08-06T04:44:33Z


I noticed a minor UI bug for ai_embeddings_semantic_search_hyde_model. Steps to replicate:

  1. Install the Discourse AI plugin
  2. Open settings → configure the Gemini key
  3. Enable ai_embeddings_semantic_search_enabled
  4. ai_embeddings_semantic_search_hyde_model shows Google - gemini-pro (not configured)

The "(not configured)" label doesn’t go away until all the configuration is complete and the page is refreshed afterwards.


I think this is a limitation of our site settings page, so apologies for that, and I’m glad you were able to get it sorted out.


A question about semantics. In some AI modules I see a reference to using Gemini, while in others I see a reference to Gemini-Pro. Are these referring to different models (Gemini Nano, Pro, and Ultra), or do they refer to the same LLM? If the latter, what does Gemini itself refer to, and does it matter whether one has a paid or a free subscription to Gemini?


There are different Gemini models, such as the ones you’ve pointed out. Depending on the one you have (likely Gemini Pro, since it’s free right now), you would just plug the API key into the relevant setting. The setting works for whichever Gemini model you have.

This would depend on you and how you want to use Gemini, but either should work

More on this here