Discourse AI looks amazing and I am super eager to set it up on my self-hosted instance!
One question I have (or perhaps a feature request) related to the helper bot and embeddings: can I choose which topics are used for retrieval-augmented generation (RAG)? For example, it would be amazing if I could configure the plugin to compute embeddings only for topics in my official docs categories. I fear that if the bot builds its vector database from everything on our forum, the output will not be good enough. It would also be interesting to configure it to compute embeddings only for topics with specific tags, or only for solved topics. I’m curious about the details of the RAG workflow. Does Discourse AI have one? Will we have the ability to control which documents get added to the vector database? And if we already have a collection of embeddings, can we configure Discourse AI to use them when calling the helper or semantic search?
I saw this briefly mentioned over here, but I’d love to know more details!
So the feature request here is to allow you to specify some additional params for various commands you add. I really like it, just need to think through the UI and data structures.
As far as I know, the Discourse AI plugin currently builds its vector database from all posts on the forum, but this approach will be refined to let admins specify which documents are included. That would give more granular control over the retrieval corpus and improve the quality of generated responses.
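As a conceptual sketch (not the plugin’s actual implementation), restricting retrieval to an allow-listed set of categories could look like this — the embedding function, topic store, and category names here are all hypothetical stand-ins:

```python
import math

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model (normally an API or model call);
    # here we hash character trigrams into a fixed-size vector for illustration.
    vec = [0.0] * 64
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 64] += 1.0
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else vec

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalised, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Hypothetical topic store: only topics in allowed categories get embedded,
# so off-topic content never enters the retrieval index.
topics = [
    {"title": "Installing the plugin",   "category": "docs"},
    {"title": "Weekend chit-chat",       "category": "lounge"},
    {"title": "Configuring embeddings",  "category": "docs"},
]
ALLOWED_CATEGORIES = {"docs"}

index = [
    (t["title"], embed(t["title"]))
    for t in topics
    if t["category"] in ALLOWED_CATEGORIES
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank indexed topics by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda item: -cosine(q, item[1]))
    return [title for title, _ in ranked[:k]]

print(retrieve("how do I configure embeddings?"))
```

Because the lounge topic was filtered out at index time, it can never be handed to the model as context — which is exactly the kind of guarantee the feature request is after.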
Moreover, the ability to incorporate pre-computed embeddings is still being explored…