Discourse AI - AI Bot

This topic covers the configuration of the AI Bot module of the Discourse AI plugin.

Feature set

The AI Bot allows for direct integration with generative AI powered by either OpenAI or Anthropic.


  • Replies from the bot stream in and can be stopped mid-generation
  • Automatically titles PMs
  • Multiple team members can interact with a chat session

GPT-4 features

  • When configured, it can generate images using Stable Diffusion
  • When configured, it can search using Google
  • Is able to search through public content on the forum

Settings

  • ai_bot_enabled_chatbots: list of bots to enable
  • ai_bot_enabled: enables the module
  • ai_bot_allowed_groups: groups with access to the chat bot
  • ai_helper_add_ai_pm_to_header: adds a robot icon to the header to initiate bot messages
  • ai_stability_api_key: (optional) Stable Diffusion API key
  • ai_google_custom_search_api_key: (optional) Google Custom Search API key
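
As a sketch of setting these programmatically: Discourse exposes an admin route for updating site settings, so something like the following works, assuming an admin API key (the host, key, and values below are placeholders):

```python
# Hedged sketch: toggling the AI Bot settings via Discourse's admin
# site-settings route. Host and API key are placeholders.
import requests

HOST = "https://forum.example.com"
HEADERS = {
    "Api-Key": "YOUR_ADMIN_API_KEY",  # placeholder admin key
    "Api-Username": "system",
}

def set_site_setting(name: str, value: str) -> None:
    """Update a single site setting through the admin API."""
    r = requests.put(f"{HOST}/admin/site_settings/{name}",
                     headers=HEADERS, data={name: value})
    r.raise_for_status()

set_site_setting("ai_bot_enabled", "true")
set_site_setting("ai_bot_allowed_groups", "staff")
```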

Providers

The AI Bot currently supports two providers: OpenAI (GPT-3.5 and GPT-4) and Anthropic (Claude).

Caveats

GPT-3.5 and Claude only offer very shallow integration; rich integration is only implemented for GPT-4.

The bot is an area of much experimentation and is changing rapidly.

Future work

  • Chat integration
  • Semantic search (currently uses traditional search)
  • Faster topic summaries

What are the differences between the official Discourse AI plugin and the Discourse Chatbot šŸ¤– (supporting ChatGPT) plugin in terms of AI bots and their features?

@merefield’s plugin has been around longer and has many more knobs to configure it. AI Bot is also a bit more ambitious (especially since we have GPT-4 access) in that we attempt to integrate it into the Discourse experience - it knows how to search and summarize topics, for example.

Notable differences as of today are probably

  • We stream replies and offer a stop button
  • @merefield offers a lot more settings to tune stuff
  • We offer a ā€œcommandā€ framework for getting the bot to act on your behalf, albeit the experience is fairly flaky on GPT 3.5
  • @merefield offers Discourse chat integration atm, we do not yet
  • We offer anthropic integration as well

How can we use it with Stable Diffusion? I have the API key but I don’t know how to prompt it (I tried to work that out from the code but didn’t succeed).

"!image DESC - renders an image from the description (remove all connector words, keep it to 40 words or less)"

To add: from my tests, it looks like AI Bot only works in PMs and Chatbot works everywhere, unless I’m doing something wrong with the AI Bot.

Image generation and streaming are nicely done, as well as the search API; however, it sometimes still falls back to ā€œI can’t search the web or cannot generate imagesā€. Are you using something similar to LangChain agents, which decide what steps to take?

Are we supposed to create a CX with scope for the full web, or just our instance URL?

That is correct. We will probably get to wider integration, but are taking our time here and trying to polish the existing stuff first.

Yes, this is the very frustrating thing about GPT 3.5 vs 4. Grounding the model for 3.5 is just super duper hard.

I am considering having an intermediary step for GPT 3.5 that first triages before actually responding (e.g.: does this interaction INTERACTION look like it should result in a !command, and if so which?). It would sadly add cost and delay, so this is my last resort.

We use a ā€œsort ofā€ langchain, limited to 5 steps, but we try to be very frugal with tokens so balance is hard.
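
For the curious, a minimal sketch of that kind of bounded command loop; the helpers and the !command format here are hypothetical stand-ins, not the plugin’s actual internals:

```python
# Minimal sketch of a bounded command loop. `complete` (the LLM call) and
# `execute` (runs !search, !image, ...) are hypothetical stand-ins.
import re

MAX_STEPS = 5  # matches the "limited to 5 steps" mentioned above

def extract_command(reply: str) -> str | None:
    """Find a !command anywhere in the model's reply."""
    m = re.search(r"!\w+[^\n]*", reply)
    return m.group(0) if m else None

def run_bot(messages: list, complete, execute) -> str:
    reply = ""
    for _ in range(MAX_STEPS):
        reply = complete(messages)            # one LLM call
        command = extract_command(reply)
        if command is None:
            return reply                      # plain answer: we are done
        result = execute(command)             # run the command
        # Feed the result back so the next completion can ground itself on it
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"RESULT: {result}"})
    return reply  # step budget exhausted; return the last reply
```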

Up to you… I like having access to all of Google, it is mighty handy

Draw me an image of a cat

Render an image of a cat

Both work for me… But yeah GPT 3.5 can certainly be annoying…

Is it possible to make a system that automatically names the titles of newly opened topics?

Users do not pay much attention to these titles, but they are very important for Google.

Yes, it does that already :slight_smile:

But Google really does not care at all, because these are PMs; it has no access to them.

What I do to ground 3.5 is adding a second, shorter system prompt lower in the final prompt to ā€œremindā€ the model of some of the rules in the main system prompt.

So it would look something like (typing from phone, trying…)

system role
user
assistant
…
…
system role ā€œreminderā€
new user prompt
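
In OpenAI chat-completion terms that ordering would look roughly like this (a sketch; the prompt contents are placeholders, only the ordering matters):

```python
# Sketch of the "reminder" trick; prompt contents are placeholders.
MAIN_SYSTEM_PROMPT = "You are a forum bot. Rules: ..."  # the full rule set
REMINDER = "Reminder: follow the rules above and stay in character."

messages = [
    {"role": "system", "content": MAIN_SYSTEM_PROMPT},
    {"role": "user", "content": "earlier user post ..."},
    {"role": "assistant", "content": "earlier bot reply ..."},
    # ... more conversation history ...
    {"role": "system", "content": REMINDER},  # repeat the key rules late
    {"role": "user", "content": "the new user prompt"},
]
```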

Just by repeating the most important system role contents, the model gives more weight to them. I’ve been using this workaround for a few months now without too many strange responses.

Especially as prompts become longer, the model tends to ā€œforgetā€ things that are higher in the final prompt. Things in AI are very hacky; I experience this in GPT models and LangChain as well. Just today I got such a strong personality in LangChain that, when asked for the time in a random city, its actions were ā€œchecking my watchā€, ā€œchanging the timezone of my watchā€ and ā€œask a strangerā€.

I actually said for normal forum topics

I follow, but we do not do forum topics atm, only PMs.

Is this the current temperature for the AI bot?

I’m assuming you rely on formatted LLM output to decide the next action to take, so this works way better with temperatures close to zero. That should help ground 3.5 and greatly improve results.

Yeah I am working on splitting this into 2 prompts at the moment.

  1. For triage
  2. For answering

It is a rather big refactor of this code base but it will allow us to have 2 temps at play, and I think this grounds both Claude and GPT 3.5 from local testing.

We end up wasting one API call, but we save a tiny bit on the system prompt and may be able to shave off more.

Without a dedicated triage call I don’t think we have a chance with GPT 3.5
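
A rough sketch of what such a two-call split could look like; the prompts, model name, and client usage here are illustrative, not the plugin’s code:

```python
# Illustrative two-call flow: a cold triage call decides whether a command
# is needed, then a warmer call writes the actual reply. All prompts and
# the model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage(user_text: str) -> str:
    """Temperature 0: we want a deterministic, parseable decision."""
    r = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Reply with exactly one of: !search <query>, !image <desc>, NONE."},
            {"role": "user", "content": user_text},
        ],
    )
    return r.choices[0].message.content.strip()

def answer(user_text: str, command_result: str = "") -> str:
    """Higher temperature: the conversational reply can afford creativity."""
    context = f"\n\nCommand result:\n{command_result}" if command_result else ""
    r = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.7,
        messages=[
            {"role": "system", "content": "You are a helpful forum bot."},
            {"role": "user", "content": user_text + context},
        ],
    )
    return r.choices[0].message.content
```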

Maybe not within the scope, but it would be interesting to train a model on all the posts in my forum and use them to create an expert user AI bot that users could interact with, or that could answer questions from users on its own in threads, and link to/quote relevant posts from the past.

I hear you, but there are massive scalability issues here. Training is hellishly expensive and not even available on GPT 3.5 / 4.

The industry is pushing really really hard on

  1. Growing token numbers (eg: Anthropic with 100k token context)
  2. Vector databases for embeddings and leaning on embeddings for context
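
As a toy illustration of the second approach (the embedding model name and client usage are assumptions, and a real setup would use a vector database rather than an in-memory list): embed the posts once, then at question time embed the query and pull the nearest posts into the prompt as context.

```python
# Toy illustration of leaning on embeddings instead of training.
# Model name and client usage are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    r = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(r.data[0].embedding)

posts = ["How to configure backups ...", "Fixing SMTP errors ..."]
post_vectors = [embed(p) for p in posts]  # computed once, offline

def top_context(question: str, k: int = 2) -> list:
    q = embed(question)
    # Cosine similarity against every stored post
    scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
              for v in post_vectors]
    best = np.argsort(scores)[::-1][:k]
    return [posts[i] for i in best]

# The retrieved posts are then prepended to the LLM prompt as grounding context.
```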

I don’t want perfect to be the enemy of good here.

I changed it so all the extra fancy stuff, like search integration and image generation, is only implemented on GPT-4, which is able to properly deal with the very complicated prompt.

I have some ideas on bringing these features to GPT 3.5 / Claude as well, but in the interim the basics are mighty useful on the simpler models.

  • Multiple people can interact with LLMs in a single session (something that is not possible in chat.openai.com)
  • Stuff streams like it does in the official UIs and can be cancelled.
  • We get access to our Markdown engine so you can get it to draw mermaid diagrams and other fancy things.

So this is very useful for general purpose tasks on the simpler models today.

Can we change the API endpoint via a setting? That could enable self-hosted OpenAI-compatible models as well.
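
For context, this is roughly what the override looks like client-side with an OpenAI-compatible self-hosted server (URL and model name are placeholders); the ask is for a site setting exposing the equivalent base URL:

```python
# Sketch: pointing an OpenAI-style client at a self-hosted, OpenAI-compatible
# server. URL and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your self-hosted endpoint
    api_key="not-needed-locally",         # many local servers ignore this
)

reply = client.chat.completions.create(
    model="local-llama",                  # whatever model your server serves
    messages=[{"role": "user", "content": "Hello from Discourse!"}],
)
print(reply.choices[0].message.content)
```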

Related: integrations from n8n are working great (Discourse plus OpenAI, all customized and secure).

That’s third party, but they are doing a great job too.

afaik self-hosting an OpenAI-class model costs more than 10 arms and 10 legs, so affording the resources to submit a PR here should be easy… totally open to a PR that adds a site setting for this.
