RAG capabilities of discourse-ai

So I just finished setting up discourse-ai, and I wonder: what are the RAG capabilities of the AI?
I see it can retrieve content from posts, but I have to ask it multiple times before it really understands that the information should be found inside a topic.
Other features are working well!

A persona has an upload section; you can upload multiple text files to your persona.

see: Discourse AI - Personas

You will need to configure embeddings for this to work, though.


Thanks, I saw that section, and that's great, but I still have a few questions.

Some context: we use Discourse as a knowledge base and a forum for answering technical questions, and we have all of our documentation on it.

We will use the upload section of the persona to feed in the data that we do not put directly on the forum, like data from documents about our company.

Now, for our technical documentation that lives on the forum, we'd like to be able to ask questions about it using an AI. From my understanding, the Discourse AI chat is not made for this, and as configured in the persona tools it will only do a search on the forum, plus some AI processing?

Is a chatbot enabling RAG capabilities on the forum itself, including on the content of topics, something that is planned?


This is all 100% supported today with a myriad of implementation options.

  1. The search tool can be scoped to a group of categories or tags (when you create a new persona and add the search tool)
  2. Custom tools provide extra flexibility here: you can make HTTP requests to the same forum and consume anything from it in any format you want, including HTTP requests to embedding search; see: API access to the embedding(s) for a post - #3 by sam. When making HTTP requests in a custom tool you can specify HTTP headers, so you can use an API key you issue on the forum.
  3. The read tool allows you to read topics
  4. This work-in-progress PR will allow you to search your uploads directly from a tool (FEATURE: RAG search within tools by SamSaffron · Pull Request #802 · discourse/discourse-ai · GitHub), which is yet another option.
  5. You can control modality (PM vs. Chat) depending on your preference
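To make option 2 concrete, here is a minimal sketch of the kind of HTTP request a custom tool could make against a forum's standard search endpoint, scoped to a category with Discourse's own `#category` search qualifier and authenticated via `Api-Key`/`Api-Username` headers. The forum URL, category name, and credential values are all placeholders, and this builds the request with the Python standard library rather than the custom-tool runtime itself.

```python
# Hedged sketch: building an authenticated, category-scoped request to a
# Discourse forum's /search.json endpoint, as a custom tool might do over HTTP.
# forum.example.com, the "docs" category, and the key/username are placeholders.
from urllib.parse import urlencode
from urllib.request import Request

BASE_URL = "https://forum.example.com"  # placeholder forum URL


def build_search_request(query: str, category: str,
                         api_key: str, api_username: str) -> Request:
    """Build a request to Discourse's /search.json endpoint.

    Scoping uses Discourse's search qualifiers: '#category' restricts hits
    to one category (tags would use 'tags:foo' instead).
    """
    q = f"{query} #{category}"
    url = f"{BASE_URL}/search.json?{urlencode({'q': q})}"
    return Request(url, headers={
        "Api-Key": api_key,           # an API key issued in the admin panel
        "Api-Username": api_username,  # the user the key acts as
    })


req = build_search_request("how to configure embeddings", "docs", "KEY", "system")
print(req.full_url)
```

To actually execute the search you would pass `req` to `urllib.request.urlopen` and parse the JSON body; inside a real custom tool you would use the tool runtime's HTTP facility instead, but the URL shape and headers are the same.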

You can see an example implementation at ask.discourse.com (which was designed as a support bot for our customers). The most important thing is that it involves no custom plugin; it is all done with the built-in Discourse AI plugin.


Disclaimer: I’m a de facto end user and I don’t even understand how AIs really work. And I use OpenAI.

There are some reasons why the AI isn't giving the wanted answers.

  • Prompting dictates where and how it can search. One bad wording and it will do whatever it wants.
  • AI isn't Google on steroids with the skill to explain things, even if it kind of is. It can find the right hits as well, and I mean as badly, as Google does. And it doesn't actually read and analyze everything; it just… thinks it does.
  • RAG and embeddings work, but they need extremely tight prompting. Even then they give only a direction, quite often not a steady base to build an answer on. Embeddings alone need a lot of manual labor, and quite often topics, again per se, are not enough. A topic or a post (even worse) can be accurate and logical enough, but out there in real life? No. That's why ask.discourse.com fails quite often if not asked a very limited and targeted question. How do I allow only specific email domains in registrations? Boom, you have the answer. How do I get notifications when a group PM box has new messages? A lot of hallucination and wrong refs.

The most disturbing idea, per OpenAI, is that wrong answers are acceptable. It is a matter of volume, and specifically of how much those hallucinated and factually wrong answers will cost a company.

Very true for companies, but really bad for that one user.

AIs can be very accurate. All that is needed is a lot of manpower to code and take care of that, and so much computing power that mining bitcoins is a cheap hobby.

My point, weak as it is, is that just dropping manuals into topics isn't enough.

This is a very important insight: you are never really done with these types of systems.

We end up repeating the process of

  1. User asks AI and gets a bad answer
  2. We review
  3. Fix documentation, accept a correct answer, and delete search landmines
  4. Ask the same question and get a correct answer

These are not the type of systems you can deploy and forget about; they need constant tuning.

Note that it really helps us a ton if you give a thumbs-down to any bad answers.


That’s very true. And there is a really huge possibility that my prompting is really bad.

But… end users are using those bots, and they aren’t good at writing high-quality questions that lead the AI in the right direction to get what is needed. And the knowledge that the wrong answer I got today will lead to better quality at some point doesn’t help that one user too much.

I don’t know what my point is, except that building/training/tuning an AI that has a better than 80% hit rate needs more work and curated content than just publishing topics. And that work costs money (so hopefully your business will be growing, because I just love proofreading, even though that functionality is massively off topic now).


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.