This topic covers the configuration of the Embeddings module of the Discourse AI plugin.
Feature set
The Embeddings module can automatically generate embeddings for every new post in your Discourse instance. Those embeddings are used by our semantic features, like semantic suggested topics and semantic topic search.
When configured, this module adds a “Related Topics” section to the bottom of all topic pages, linking topics similar to the current one and helping users find discussions related to what they are currently reading. Embeddings can be generated by one of two providers:
Open Source: a collection of open-source models from SBERT. Recommended and the default (see the sketch after this list).
OpenAI: Uses the OpenAI API to generate embeddings. You will need a working OpenAI API key for this.
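To illustrate what these models do, here is a minimal sketch (not code from the plugin) that embeds two posts with the sentence-transformers library and compares them by cosine similarity. The model name all-mpnet-base-v2 is just one of the SBERT models and may not match what your instance is configured with:

```python
# Minimal sketch of how an SBERT model turns posts into embeddings.
# Assumes the sentence-transformers package; the model name below is
# illustrative and may differ from your instance's configuration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-mpnet-base-v2")

posts = [
    "How do I configure the Embeddings module?",
    "Setting up semantic related topics in Discourse AI",
]

# encode() returns one fixed-length vector per input text
embeddings = model.encode(posts, convert_to_tensor=True)

# Related topics are the ones whose vectors are closest,
# here measured with cosine similarity
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(f"cosine similarity: {similarity.item():.3f}")
```

Topics whose post embeddings score high on this kind of similarity measure are the candidates surfaced in the “Related Topics” section.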
Settings
ai_embeddings_enabled: Enables or disables the module.
ai_embeddings_discourse_service_api_endpoint: URL of the API the module uses to generate embeddings. If you are using CDCK hosting, this is handled automatically for you. If you are self-hosting, check the self-hosting guide.
ai_embeddings_discourse_service_api_key: API key for the API configured above. If you are using CDCK hosting, this is handled automatically for you. If you are self-hosting, check the self-hosting guide.
ai_embeddings_models: Every model enabled here will be used to generate embeddings from user posts.
ai_embeddings_semantic_suggested_model: Model that will be used for semantic suggested topics. The model picked here must also be enabled in the ai_embeddings_models setting.
ai_embeddings_generate_for_pms: Whether PMs should also have embeddings generated automatically when they are created.
ai_embeddings_semantic_related_topics_enabled: Shows related topics at the bottom of the topic page. This adds an extra block, between the topic's last post and the suggested topics, containing topics that are semantically related to the current one.
ai_embeddings_pg_connection_string: Database connection string for the PostgreSQL instance that will store the embeddings. If you are using CDCK hosting, this is handled automatically for you. If you are self-hosting, check the self-hosting guide.
ai_openai_api_key: OpenAI API key. Necessary if you want to use OpenAI to generate embeddings. If so, don't forget to pick text-embedding-ada-002 in both ai_embeddings_models and ai_embeddings_semantic_suggested_model (see the request sketch after this list).
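For reference, this is roughly the kind of request made when OpenAI is the configured provider: a minimal sketch against the public /v1/embeddings endpoint (the plugin's actual internals may differ):

```python
# Sketch of an OpenAI embeddings request, similar in spirit to what the
# plugin does when text-embedding-ada-002 is selected.
# Requires the requests package and an OPENAI_API_KEY environment variable.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/embeddings",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "text-embedding-ada-002",
        "input": "The raw text of the post to embed",
    },
    timeout=30,
)
resp.raise_for_status()

vector = resp.json()["data"][0]["embedding"]
print(len(vector))  # text-embedding-ada-002 returns 1536-dimensional vectors
```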
Great work, and thanks first of all! Somehow, though, I can't see similar topics under my topics. My settings are like this, and I added an OpenAI key. Semantic search works, but how can I show related articles under topics?
How are the jobs that generate embeddings scheduled? From the code, it seems embeddings are only generated when a page is viewed and its embeddings are missing. Is there a way to generate embeddings for the whole site when turning the feature on?
Sometimes, when reading a topic, one knows most of the background being referenced, but there are also some mentions that are unfamiliar. While there is summarization for summarizing an entire topic up to a given point, what would also help would be an AI option that inserts a glossary for the topic as a post near the top and updates it when a user selects a word or phrase they want the AI to include in the glossary.
Today, while reading this topic, there was one reference I did not recognize, so I looked it up and added a reply with a reference for it. While I know the remaining references, I am sure there are others, especially those new to LLMs and the like, who would have no idea about many of the references noted; if the AI could help them, they would visit the site much more often.
While I know what RAG means in this topic's first post, how many really do?
Note: I did not know which topic to post this under, but since it needs embeddings to work, I posted it here. Please move it if it makes more sense elsewhere, or as the Discourse AI plugin changes.
Are embeddings the only variable when determining “Related Topics”? Or are there other factors that are considered (e.g. author, topic score, topic age, category)?
I've done a little bit of testing. Sorry, not very consistently; more in the style of a hare caught between the headlights of a car.
It can definitely handle Finnish too. I think the more fundamental issues are with AI and minor languages. And with users.
First of all, OpenAI doesn't have enough material to handle Finnish, but I'm sure that situation applies to every language where there isn't enough material for the AI to steal and learn from. That means semantics is much more difficult than other questions, and those are already really difficult for ChatGPT when using a language other than English or the other major ones.
It looks like GPT-4 is more accurate than GPT-3.5-turbo. But where the hits from 3.5 were just noise perhaps 8 times out of 10, and even Discourse could offer those 2 right ones purely using tags, GPT-4 had something like a 50% success ratio. And yes, those are off-the-cuff statistics.
Creating a search where the semantic approach is… helpful is actually quite difficult. For me, anyway, because I had expectations of what I should get. So it is not only a matter of real semantic searches, but more or less of searching with an inaccurate search sentence instead of a list of search terms created from that sentence. Yes, I know: that is a semantic search too.
My point, weakly held, is that the semantic component works as it should; the issues come from the limitations of the AI itself and from users' too-high expectations. A language other than English is not an issue per se.
But…
Semantic full-page search is awfully slow. Am I right to blame the technical weaknesses of my VPS (not enough RAM, magical creatures, etc.)? Because here it is fast.
Secondly… can we at some point offer AI hits as the default, over those generated by Discourse?
Just to keep things and topics together: I was very wrong. It has nothing to do with 3.5 versus 4. The reason was the behaviour of semantic search on mobile. It starts searching after three characters, and then the result is very wrong. When the advanced filter is opened, or the search button is clicked if I'm remembering right, the AI does a new search and updates the results, and then the “hit ratio” is closer to right.