Discourse Chatbot 🤖 (Now smarter than ChatGPT!*)

Enjoying this plugin? Please :star: it on GitHub! :pray:

What is it?

  • The original Discourse AI Chatbot!
  • Converse with the bot in any Topic or Chat Channel, one to one or with others!
  • Customise the character of your bot to suit your forum!
    • Want it to sound like William Shakespeare or Winston Churchill? Can do!
  • The new “RAG Mode”* can now:
    • Search your forum** for answers so the bot can be an expert on the subject of your forum.
      • not just be aware of the information on the current Topic or Channel.
    • Search Wikipedia
    • Search current news*
    • Search Google*
    • Return current End Of Day market data for stocks.*
    • Do “complex” maths accurately (with no made up or “hallucinated” answers!)
  • Uses the cutting-edge Open AI API and the functions capability of their excellent, industry-leading Large Language Models.
  • Includes a special quota system to manage access to the bot: more trusted and/or paying members can have greater access to the bot!
  • Also supports Azure and proxy server connections.

*sign-up for external (not affiliated) API services required. Links in settings.

RAG mode is very smart and knows facts posted on your forum:

Basic bot mode can sometimes make mistakes, but is cheaper to run because it makes fewer calls to the Large Language Model:

(Sorry China! :wink: )

:biohazard: **Bot’s “vision” - what it can see (and potentially share) and privacy :biohazard:

This bot can be used in public spaces on your forum. To make the bot especially useful there is RAG mode (one setting per bot trust level). This is not set by default.


In this mode the bot is, by default, privy to all content a Trust Level 1 user would see, working from this setting:


Thus, if interacted with in a public-facing Topic, there is a possibility the bot could “leak” information if you tend to gate content at Trust Level 0 or 1 via Category permissions. This level was chosen because, in experience, most sites do not gate sensitive content at low trust levels, but it depends on your specific needs.

This can be eliminated by:

  • only using the bot in normal mode (but the bot then won’t see any posts)
  • only allowing the bot to be used in Categories that require the set trust level or above to read

…or merely mitigated by moderation.

In addition, anything it can “see” gets shared with Open AI.

You can see that this setup is a compromise. In order to make the bot useful, it needs to be knowledgeable about the content on your site. Currently it is not possible for the bot to selectively read members-only content and share it only with members, which some admins might find limiting, but there is no easy way to solve that whilst the bot is able to talk in public. Contact me if you have special needs and would like to sponsor some work in this space. Bot permissioning with semantic search is a non-trivial problem. The system is currently optimised for speed. NB Private Messages are never read by the bot.


  • May not work on multi-site installs (not explicitly tested), but PRs welcome to improve support :+1:
  • Open AI API responses can be slow at times on more advanced models due to high demand. However, Chatbot also supports GPT 3.5, which is fast, responsive and perfectly capable.
  • Is extensible, and supporting other cloud bots is intended (hence the generic name for the plugin), but it currently ‘only’ supports interaction with Open AI Large Language Models (LLMs) such as “ChatGPT”. This may change in the future. Please contact me if you wish to add additional bot types or want to support me in adding more. PRs welcome.
  • Is extensible to support searching of other content beyond just the current set provided.


Creating the Embeddings

This is only necessary if you want to use the RAG-type bot and ensure it is aware of the content on your forum, not just the current Topic.

This step is only required once, after you’ve installed the plugin. New or updated posts are automatically embedded.

Initially, we need to create the embeddings for all posts, so the bot can find forum information.

Enter the container:

./launcher enter app

and run the following rake command:

rake chatbot:refresh_embeddings[1]

which at present will run twice for an unknown reason (sorry! feel free to PR a fix), but the [1] ensures the second pass only adds missing embeddings (i.e. none immediately after the first run), so this is somewhat moot.

In the unlikely event that you get rate limited by OpenAI, you can complete the embeddings by running:

rake chatbot:refresh_embeddings[1,1]

which will fill in the missing ones (so nothing is lost from the error) but will continue more cautiously, putting a one-second delay between each call to Open AI.

Compared to bot interactions, embeddings are not expensive to create, but do watch your usage on your Open AI dashboard in any case.

NB Embeddings are only created for Posts, and only those Posts a Trust Level 1 user would have access to. This seemed like a reasonable compromise. It will not create embeddings for content accessible only at Trust Level 2 and above.

Error when requesting an embedding for too many characters

You might get an error like this:

OpenAI HTTP Error (spotted in ruby-openai 6.3.1): {"error"=>{"message"=>"This model's maximum context length is 8192 tokens, however you requested 8528 tokens (8528 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", "type"=>"invalid_request_error", "param"=>nil, "code"=>nil}}

This is how you resolve it …

As per your error message, the embedding model has a limit of:

8192 tokens

however you requested 8528

You need to drop the current value of the chatbot_open_ai_embeddings_char_limit setting by about 4 x the difference and see if it works (a token is roughly 4 characters).

So, in this example, 4 x (8528 - 8192) = 1344

So drop the current value of chatbot_open_ai_embeddings_char_limit by 1500 to be safe. The default value was set according to a lot of testing on English Posts, but for other languages it may need lowering further.

This will then cut off more text, requesting fewer tokens, and hopefully the embedding will go through. If not, you will need to confirm the difference and reduce the value further accordingly. Eventually it will be low enough that you don’t need to look at it again.
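The arithmetic above can be sketched as a quick shell check, using the numbers taken from the example error message:

```shell
# Sanity-check the reduction to chatbot_open_ai_embeddings_char_limit,
# assuming a token is roughly 4 characters.
requested_tokens=8528   # "you requested 8528 tokens"
max_tokens=8192         # "maximum context length is 8192 tokens"

diff=$(( requested_tokens - max_tokens ))   # tokens over the limit
drop=$(( 4 * diff ))                        # converted to characters

echo "Lower chatbot_open_ai_embeddings_char_limit by at least ${drop} chars"
```

Round the result up (here, 1500 rather than 1344) to leave a safety margin.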

How To Switch Embeddings model

  • Change the setting chatbot_open_ai_embeddings_model to your new preferred model
  • It’s best to first delete all your current embeddings:
    • go into the container ./launcher enter app
    • enter the rails console rails c
    • run ::DiscourseChatbot::PostEmbedding.delete_all
    • exit (to return to root within container)
  • run rake chatbot:refresh_embeddings[1]
  • if for any Open AI side reason that fails part way through, run it again until you get to 100%
  • the new model is known to be more accurate, so you might have to drop chatbot_forum_search_function_similarity_threshold or you might get no results :). I dropped my default value from 0.8 to 0.6, but your mileage may vary.
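Putting those steps together, the whole switch looks roughly like this. This is a sketch only: it must be run on your Discourse host, and /var/discourse is the standard install location, so adjust if yours differs.

```shell
cd /var/discourse
./launcher enter app       # enter the container

# Inside the container: clear the old embeddings via the rails console ...
rails c
# > ::DiscourseChatbot::PostEmbedding.delete_all
# > exit

# ... then, back at the container shell (after changing the
# chatbot_open_ai_embeddings_model setting in admin), rebuild them:
rake chatbot:refresh_embeddings[1]
```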

Bot Type

Take a moment to read through the entire set of Plugin settings. The chatbot bot type setting is key, and there is one for each chatbot “Trust Level”:


RAG mode is superior but will make more calls to the API, potentially increasing cost. That said, its reduced propensity to output ‘hallucinations’ may let you drop down from GPT-4 to GPT-3.5, so you may end up spending less despite the significant increase in the usefulness and reliability of the output. GPT 3.5 is also a better fit for the Agent type based on response times. A potential win-win! Experiment!

For Chatbot to work in Chat you must have Chat enabled.

Bot’s speed of response

This is governed mostly by the chatbot_reply_job_time_delay setting, over which you have discretion.

The intention of having this setting is to:

  • protect you from reaching rate limits of Open AI
  • protect your site from users that would like to spam the bot and cost you money.

It now defaults to ‘1’ second and can be reduced to zero :racing_car: , but be aware of the above risks.

Set this to zero and the bot, even in ‘agent’ mode, becomes a lot more ‘snappy’.

Obviously this can be a bit artificial and no real person would actually type that fast … but set it to your taste and wallet size.


NB I cannot directly control the speed of response of Open AI’s API. The general rule is that the more sophisticated the model you set, the slower the response will usually be. So GPT 3.5 is much faster than GPT 4 … although this may change with the newer GPT 4 Turbo model.


You must get a token from https://platform.openai.com/ in order to use the current bot. A default language model is set (one of the most sophisticated), but you can try a cheaper alternative; the list is here.

There is an automated part of the setup: upon addition to a Discourse, the plugin currently sets up an AI bot user with the following attributes:

  • Name: ‘Chatbot’
  • User Id: -4
  • Bio: “Hi, I’m not a real person. I’m a bot that can discuss things with you. Don’t take me too seriously. Sometimes, I’m even right about stuff!”
  • Group Name: “ai_bot_group”
  • Group Full Name: “AI Bots”

You can edit the name, avatar and bio (see locale string in admin → customize → text) as you wish but make it easy to mention.

It’s not free, so there’s a quota system, and you have to set this up

Initially no-one will have access to the bot, not even staff.

Calling the Open AI API is not free after an initial free allocation has expired! So, I’ve implemented a quota system to keep this under control, keep costs down and prevent abuse. The cost is not crazy with these small interactions, but it may add up if it gets popular. You can read more about OpenAI pricing on their pricing page.

In order to interact with the bot, you must belong to a group that has been added to one of the three trusted sets of groups: low, medium and high trust. You can modify the number of allowed interactions per week for each trusted group set in the corresponding settings.

You must populate the groups too. That configuration is entirely up to you. They start out blank so initially no-one will have access to the bot:


In this example I’ve made staff have high trust access, whilst trust_level_0 have low trust. They get the corresponding quotas in three additional settings.

Note the user gets the quota based on the highest trusted group they are a member of.

“Prompt Engineering”

There are several locale text “settings” that influence what the bot receives and how the bot responds.

The most important one you should consider changing is the bot’s system prompt. This is sent every time you speak to the bot.

For the basic bot, you can try a system prompt like:

’You are an extreme Formula One fan, you love everything to do with motorsport and its high octane levels of excitement’ instead of the default.

(For the RAG bot you must keep everything after “You are a helpful assistant.” or you may break the agent behaviour. Reset it if you run into problems. Again, experiment!)

Try one that is most appropriate for the subject matter of your forum. Be creative!

Note that there are now two system prompts for each bot type. One .open is used when talking to the bot in “public”. The other .private is applied when talking to the bot in Personal Messages or Direct Message chat. This is so that you can customize private behaviour for e.g. a support bot.

Changing these locale strings can make the bot behave very differently but cannot be amended on the fly. I would recommend changing only the system prompt as the others play an important role in agent behaviour or providing information on who said what to the bot.

NB In Topics, the first Post and Topic Title are sent in addition to the window of Posts (determined by the lookback setting) to give the bot more context.

You can edit these strings in Admin → Customize → Text under chatbot.prompt., the most important of which are the system prompts which are in: chatbot.prompt.system.

Supports both Posts & Chat Messages!

The bot supports Chat Messages and Topic Posts, including Private Messages (if configured).

You can prompt the bot to respond by replying to it, or by @ mentioning it. You can set how far back the bot looks for context for a response. The bigger the value, the more costly each call will be.

There’s a floating quick chat button that connects you immediately to the bot. Its styling is a little experimental (modifying some z-index values of your base forum on mobile) and it may clash on some pages. This can be disabled in settings. You can choose whether to load the bot into a 1 to 1 chat or a Personal Message.


Now you can choose your preferred icon (default :robot: ) or, if the setting is left blank, it will pick up the bot user’s avatar! :sunglasses:

avatar: image OR icon: image

And remember, you can also customise the text that appears when it is expanded:


… using Admin → Customize → Text

(though you may need to customise the CSS a little to accommodate colours and sizing you want).

Uninstalling the plugin - Important!

Due to recent efforts to simplify the plugin, the only step now necessary to uninstall it is to remove the clone statement.

Thanks for your interest in the plugin!

Disclaimer: I’m not responsible for what the bot responds with. Consider the plugin to be at Beta stage and things could go wrong. It will improve with feedback, but not necessarily the bot’s responses :rofl: Please understand the pros and cons of an LLM, what they are and aren’t capable of, and their limitations. They are very good at creating convincing text but can often be factually wrong.

Important Privacy Note: whatever you write on your forum may get forwarded to Open AI as part of the bot’s scan of the last few posts once it is prompted to reply (this is restricted to the current Topic or Chat Channel). Whilst it almost certainly won’t be incorporated into their pre-trained models, they will use the data in their analytics and logging. Be sure to add this fact to your forum’s TOS & privacy statements. Related links: Terms of use, Privacy policy, https://platform.openai.com/docs/data-usage-policies

Copyright: Open AI made a statement about Copyright here: https://help.openai.com/en/articles/5008634-will-openai-claim-copyright-over-what-outputs-i-generate-with-the-api

TODO/Roadmap Items

  • Add front and back-end tests :construction:
  • Add “bot typing” indicator and “response streaming” (@Aizada_M, @MarcP) :construction:
  • Forgot to mention the bot? Get the bot to respond to edits that add its @ mention (@frold )
  • Add a badge? You did mention @botname (@frold )
  • Add setting to include Category and Pinned Posts prompt? (@Ed_S)
  • Ditto Bios to each message history prompt? (@Ed_S , @codergautam). Will this even work? Let’s get evidence.
  • Update Discourse Frotz with this better codebase?
  • Move to pgvector in favour of pgembedding for vector search now that the former supports fast HNSW lookup. :white_check_mark:
  • Add semantic search so that the bot can read your forum Posts and become an “expert” :wink: :white_check_mark:
  • Add agent behaviour to reduce hallucinations and leverage reliable, factual information. :white_check_mark:
  • Add extra logic to convert suspected usernames into @ mentions (@frold ) :white_check_mark:
  • Add GPT-4 support (when Open AI deems me worthy enough of access! :sweat_smile: ) :white_check_mark:
  • Add custom model name support. :white_check_mark:
  • Add option to strip out quotes from Posts before passing text to API. :white_check_mark:
  • Improve error transparency & handling for when Open AI returns an error state :white_check_mark:
  • Add retry capability for timed out API requests :white_check_mark:
  • Add support for ChatGPT :white_check_mark:
  • Lint the plugin to Discourse core standards :white_check_mark:
  • Add CI workflows :white_check_mark:
  • Add settings to influence the nature of the bot’s response (e.g. how wacky it is). :white_check_mark:
  • include Topic Title & first Posts to prompt :white_check_mark:
  • Add setting to switch from raw Post/Message data to cooked, to potentially leverage web training data better (suggestion by @MarcP). NB May cost more and limit what is returned, as input tokens are counted and cooked is much bigger. (I think we’ve abandoned this idea.)


*It still uses Open AI’s ChatGPT engine, but can now leverage local functions and data from API calls to limit hallucinations.


You may have seen the news from Open AI. If not, here it is:

Support for New Embeddings Model

I’ve added support for the new Embeddings model to Chatbot.

This is higher performance (improved retrieval) and 1/5th of the price to use (though, admittedly, embeddings have never been the real issue cost-wise).

  • supports the new, higher-performance and cheaper (1/5th!) 3rd-generation model text-embedding-3-small, which conveniently has the same dimensions, so no changes to the DB are required.
  • introduces a new setting, chatbot_open_ai_embeddings_model, which defaults to the current model, so if you don’t want to change it, you don’t have to do anything.
  • requires refresh of all embeddings with:
    • ::DiscourseChatbot::PostEmbedding.delete_all on rails console
    • rake task rake chatbot:refresh_embeddings[1]*
    • you might have to reduce chatbot_forum_search_function_similarity_threshold in order to get results.
  • only relevant to those who use the bot in “RAG” mode

NB I don’t believe this has been rolled out to Azure yet.

You must take action to use it; the new setting defaults to the old model.


New GPT 3.5 and preview 4 Turbo Models

gpt-3.5-turbo-0125 was introduced. gpt-3.5-turbo in Chatbot settings will point to this automatically within 2 weeks, but if you can’t wait you can use the custom model settings (e.g. chatbot_open_ai_model_custom_name_medium_trust) to point to this new model now. GPT 3.5 turbo has halved in price for input tokens with this new model.

There’s a new alias, gpt-4-turbo-preview, which will always point to the latest GPT 4 Turbo preview model (currently gpt-4-0125-preview). I may add this to the default list for convenience, but use the custom name settings for now.

* this will run twice if left going, which is a known issue. You can harmlessly stop it (Ctrl-C) at any point during its second run and you will not cause any issue. Because it only fills in missing ones, there is no additional cost from this issue.

[1] pry(main)> rake chatbot:refresh_embeddings[1]
NameError: undefined local variable or method `refresh_embeddings' for main:Object
from (pry):1:in `__pry__'


You need to do that at the normal prompt, not the rails console :slight_smile:

Hence the exit



Now I remember why it was so familiar… I did exactly the same in the beginning, but back then I worked it out by myself.


Were you able to create embeddings for your posts?

Mine aborts after ~1000 posts with this error:

OpenAI HTTP Error (spotted in ruby-openai 6.3.1): {"error"=>{"message"=>"The server had an error while processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID redacted in your message.)", "type"=>"server_error", "param"=>nil, "code"=>nil}}
rake aborted!

Same here, but after 20-ish. It is not the first time OpenAI has had problems. They aren’t exactly production-ready, if they ever will be.

Any chance you started observing

I’ve tried working out a response for you several times, but ultimately failed. Please contact the admin if this persists, thank you!

recently? I kept thinking there was some issue with chatbot settings, but I didn’t find any issues, and now I’m thinking maybe it’s also on OpenAI’s end?

It gives error 500 as the response, and that comes from an issue on the target server. So?

I didn’t know this message corresponded to a 500 from OpenAI, and I haven’t seen any errors in the console. Would you mind sharing how you became confident that this is a 500 from OpenAI?

Well, I don’t know it for sure, but:

OpenAI HTTP Error (spotted in ruby-openai 6.3.1): {"error"=>{"message"=>"The server had an error while processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID 6be311f9d72593c31861435b3b26b73c in your message.)", "type"=>"server_error", "param"=>nil, "code"=>nil}} rake aborted! Faraday::ServerError: the server responded with status 500 (Faraday::ServerError)

Looks like they have a production issue. 500 is a server error their side.

Just rerun the rake until you clear 100%.

If it’s any consolation, I found their new model fell over several times when I tried it on Monday.

I guess I wasn’t clear enough, but

I’ve started seeing this response from the bot on forum when I tried to summon him to respond to some topics, i.e. this is completely unrelated to the error with rake on embeddings

No, you weren’t, because:

But this is a totally different incident.

That error is a general one and means only that Chatbot couldn’t get the requested answer from OpenAI. For me it was error 400 (or was; it has behaved really well since the latest updates from both sides, plugin AND OpenAI, which happened at almost the same time). That may come from issues with the prompt, JSON etc., but quite often it is a matter of rate limiting.

And I would guess the most common reason for that comes from the user’s (i.e. our) side, when too high a token limit for answers is used in settings. I used a 2000 limit because I needed, or so I reckoned, longer answers. But when there is/was a 4K limit from OpenAI, it led to error 400 almost every time.

Before the latest update there was an issue on my forum, though. The chatbot got error 400 every time in one category. But that was not an issue with Chatbot per se; it came from the Category Lockdown plugin (I miss that plugin greatly, BTW).

Error 404 comes when using a wrong model name :wink:

Discourse’s logs tell you the real nature of an error, and then OpenAI’s dev forum (and sometimes Google) helps further.

(I might be too old, but I feel this whole GPT world is really confusing, and the very base of the system is a very… hallucinated thing.)

Set verbose logging (bottom of Chatbot settings) and look for the actual call and Open AI response. That will help you track down the actual issue.

  • 500: their problem
  • 404: yours, e.g. possibly a custom URL issue if using Azure or a proxy, or an incorrect Azure deployment
  • 400: yours, e.g. incorrect model name, or context too big
  • 429: yours, you are exceeding your Open AI account’s rate limits. Increase the bot response delay (Chatbot setting)
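The cheat sheet above can be wrapped in a small helper for quick triage; this is a hypothetical convenience function of my own, not part of the plugin:

```shell
# Hypothetical triage helper: map the HTTP status seen in the verbose logs
# to the usual culprit, per the list above.
diagnose_openai_status() {
  case "$1" in
    500) echo "their problem: OpenAI server error, retry later" ;;
    404) echo "yours: check custom URL, Azure deployment or model name" ;;
    400) echo "yours: incorrect model name or context too big" ;;
    429) echo "yours: rate limited, increase the bot response delay" ;;
    *)   echo "unknown: inspect the full response in the verbose logs" ;;
  esac
}

diagnose_openai_status 429
```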

A wrong model name gave a 404 not so long ago, when I totally miswrote the model name.

No biggie. 400 can come from all sides (admin, plugin and OpenAI), but most of the time the reason is the admin. 500 is always OpenAI; all the others come from the user/admin/plugin, depending on several things.


Anyway, bottom line: set chatbot enable verbose rails logging to true and, as stuff happens, look at your logs :male_detective:


So I want to report something disappointing.

I’ve rolled back to ADA 2 from text-embedding-3-small, because I’ve been finding the search results less optimal.

My specific issue has been searching on version numbers.

Your mileage may vary, but I’m interested in your experience.

I rolled back too, but the reason (which may or may not explain your experience) was token limits. If I had over 15,000 chars in settings I could never generate embeddings because of that error 500. If I used 15,000, generating stopped quite early due to the token rate. But if I use (or used; I haven’t re-checked yet) the old 002 model, 25,000 characters worked just fine.

So if I have to drop it close to 10,000, it must have some effect.

There are too many bad experiences out there with those new models.


Tried generating embeddings with the old model and 5k, 10k, 15k, 25k character limits – all give 500 after ~200 or so posts.