Enable AI Search

This topic covers configuring the AI Search feature, part of the Embeddings module of the Discourse AI plugin.


Similar to Related Topics, AI Search uses semantic textual similarity to find the most relevant topics, going beyond the exact keyword matching of traditional search. This surfaces topics that are not exact matches for your query but are still relevant to it. If you can’t find what you’re looking for, AI Search is here to help!



  • Semantic textual similarity: going beyond just a keyword match and using semantic analysis to find textual similarity
  • Toggled in full-page search
  • Applicable to both anonymous and logged-in users
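As a rough illustration of the idea (not Discourse’s actual implementation), semantic textual similarity can be measured as the cosine similarity between embedding vectors. The vectors below are made up for the example; real embedding models emit hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 4-dimensional "embeddings" for illustration only.
query_vec    = [0.9, 0.1, 0.0, 0.2]  # "AI Search"
related_vec  = [0.8, 0.2, 0.1, 0.3]  # "semantic search" -- no shared keyword
offtopic_vec = [0.0, 0.9, 0.8, 0.1]  # an unrelated topic

print(cosine_similarity(query_vec, related_vec))   # high similarity
print(cosine_similarity(query_vec, offtopic_vec))  # low similarity
```

Topics whose embeddings sit close to the query’s embedding are returned even when they share no keywords with it.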


Currently, this feature is available to self-hosted sites and to customers hosted by Discourse on the Enterprise plan.

Enabling AI Search


To use AI Search you will need both an Embeddings provider and a Large Language Model (LLM) provider.


Embeddings

Follow the guide at Enable Related Topics to set up an Embeddings provider.

Large Language Model (LLM)

Currently, both hosted-by-Discourse and self-hosted customers will need to configure an LLM provider.


The following instructions apply to any Discourse instance (hosted or self-hosted):

  1. Go to Admin → Plugins → search for or find discourse-ai and make sure it’s enabled
  2. Enable ai_embeddings_enabled to turn on the Embeddings module needed for AI Search
  3. Enable ai_embeddings_semantic_search_enabled to activate the AI Search feature

Technical FAQ

Architecture Diagram

```mermaid
sequenceDiagram
    User->>+Discourse: Search "gamification"
    Discourse->>+LLM: Create an article about "gamification" in a forum about<br>"Discourse, an open source Internet forum system."
    LLM->>+Discourse: Gamification involves applying game design elements like<br>points, badges, levels, and leaderboards to non-game contexts...
    Discourse->>+EmbeddingsAPI: Generate Embeddings for "Gamification involves applying game design..."
    EmbeddingsAPI->>+Discourse: [0.123, -0.321...]
    Discourse->>+PostgreSQL: Give me the nearest topics for [0.123, -0.321...]
    PostgreSQL->>+Discourse: Topics: [1, 5, 10, 50]
    Discourse->>+User: Topics: [1, 5, 10, 50]
```
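The flow above can be sketched in Python. This is a toy illustration, not Discourse’s actual code: `fake_llm` and `fake_embed` are stand-ins for the real LLM and embeddings API calls, and the nearest-neighbor lookup is a brute-force scan rather than a pgvector query in PostgreSQL:

```python
import math

def fake_llm(query: str) -> str:
    # Placeholder for the real LLM call that writes a hypothetical post.
    return f"An article about {query} on a Discourse forum..."

def fake_embed(text: str) -> list[float]:
    # Placeholder for the real embeddings API; here, a toy
    # 26-dimensional letter-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((math.sqrt(sum(x * x for x in a))
                   * math.sqrt(sum(x * x for x in b))) or 1.0)

def hyde_search(query: str, topics: dict[int, str], k: int = 3) -> list[int]:
    """HyDE-style search: embed a hypothetical answer, rank topics by similarity."""
    hypothetical = fake_llm(query)                   # 1. LLM writes a fake post
    qvec = fake_embed(hypothetical)                  # 2. embed the fake post
    scored = [(cosine(qvec, fake_embed(text)), tid)  # 3. find nearest topics
              for tid, text in topics.items()]
    return [tid for _, tid in sorted(scored, reverse=True)[:k]]

# Hypothetical topic IDs and titles for the example.
topics = {
    1: "Gamification with points, badges and leaderboards",
    5: "How to configure email notifications",
    10: "Using game design elements to drive engagement",
}
print(hyde_search("gamification", topics))
```

The real implementation replaces the brute-force scan with an indexed nearest-neighbor query against the embeddings stored in PostgreSQL.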
  • How does AI Search work?
    • The initial search query is run through an LLM, which writes a hypothetical topic/post. Embeddings are then generated for that post, and your site is searched for topics similar to it.
  • How is topic/post data processed?
    • Hosted by Discourse: LLM data is processed by a third-party provider; please refer to your specific provider for details. By default, the Embeddings microservice is run alongside the other servers that host your existing forum. No third party is involved there, and that information never leaves our internal network in our virtual private datacenter.
  • Where does the data go?
    • Hosted by Discourse/Self-hosted: The hypothetical topic/post created by the LLM provider is temporarily cached alongside the Embeddings for that document. Embeddings data is stored in the same database as your topics, posts, and users; it’s simply another table in there.
  • What does the Embeddings “semantic model” look like? How was it “trained”, and is there a way to test that it can accurately apply to the topics on our “specialized” communities?
    • Hosted by Discourse: By default we use and recommend this model. We have it deployed for many customers and have found that it performs well for both niche and general communities. If the performance isn’t good enough for your use case, we have more complex models ready to go, but in our experience the default option is a solid choice.


  • AI Search does not always find topics with 100% relevancy

I noticed a minor UI bug for ai embeddings semantic search hyde model. Steps to replicate:

  1. Install AI Discourse plugin
  2. Open settings → Configure Gemini key
  3. Enable ai embeddings semantic search enabled
  4. ai embeddings semantic search hyde model shows Google - gemini-pro (not configured)

The “(not configured)” label doesn’t go away until all the configurations are enabled and the page is refreshed thereafter.


I think this is a limitation of our site settings page, so apologies for that. Glad you were able to get it sorted out.


A question about semantics. In some AI modules I see a reference to using Gemini while in others I see a reference to Gemini-Pro. Are these referring to different models (Gemini Nano, Pro and Ultra) or do they refer to the same LLM? If so then what does Gemini itself refer to and does it matter if one has a paid or a free subscription to Gemini?


There are different Gemini models, such as the ones you’ve pointed out. Depending on the one you have (likely Pro, since it’s free right now), you would just plug in the API key in the relevant setting. The setting works for whichever Gemini model you have.

This would depend on you and how you want to use Gemini, but either should work

More on this here