Discourse Chatbot 🤖 (Now smarter than ChatGPT!*)

Simple math, based on default chatbot settings and gpt-3.5-turbo model.

This means that if you have 100 “average” high-TL users who all happen to use their maximum chatbot requests every week, it would cost you less than 20 dollars a month.
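For a rough sense of the arithmetic, here is a back-of-the-envelope sketch (not the plugin's actual accounting). The per-request token count and weekly quota below are illustrative assumptions; the price is gpt-3.5-turbo's launch rate of $0.002 per 1K tokens, applied to input and output combined:

```python
# Back-of-the-envelope monthly cost estimate for gpt-3.5-turbo.
# Every number except the price is an illustrative assumption.
PRICE_PER_1K_TOKENS = 0.002   # USD, gpt-3.5-turbo launch pricing
USERS = 100                   # "average" high-TL users
REQUESTS_PER_WEEK = 50        # assumed max chatbot quota per user
TOKENS_PER_REQUEST = 400      # assumed input + output tokens per request
WEEKS_PER_MONTH = 4.33

monthly_tokens = USERS * REQUESTS_PER_WEEK * TOKENS_PER_REQUEST * WEEKS_PER_MONTH
monthly_cost = monthly_tokens / 1000 * PRICE_PER_1K_TOKENS
print(f"~${monthly_cost:.2f} per month")  # under $20 with these assumptions
```

With these (made-up) settings the estimate lands around $17 a month; heavier prompts or larger quotas scale it linearly.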

EDIT: Oops, I forgot to account for input tokens.


The dollar is worth 6 times more than my country’s currency; it is too expensive for me.


Do you have a forum where I can see how this GPT chat works? Just out of curiosity: I find it so revolutionary that I wanted to test it.

Just try ChatGPT:


Or the Playground if you want to fine-tune. Keep in mind the Playground doesn’t have gpt-3.5-turbo (yet?).

Pick up an API token: it’s free for ~3 months with $18 of trial credit. Then you can test out the plugin too.


I think it’s best to keep the code as simple as possible, to make future enhancements easier without too many complications. The current behaviour in Topics is already as you describe, except that it won’t reply to the first post automatically. However, you can easily achieve this by adding a topic template to your ask-AIBot category that mentions @AIBot (and you can even add a default prompt to fine-tune the responses!).


I wonder if it would help to give the LLM a bit more context - perhaps Category description, or the content of a sticky post?


Yeah, I’ve already thought about the Title being important. We could ship that with every request too.

Good suggestion, I’ll roadmap it.


Also, it would be better if it had the usernames as context; it seems to think all the previous messages are mine (in a group chat with multiple participants).


Yeah, I have implemented that in AI Topic Summary (and it can work really well), but not here. Another candidate for a switch, maybe.

The distinction is made by the user/assistant roles, but I’m not sure of the effect of adding usernames, or what format to do that with here … this needs experimentation, I suspect.

Yes, it will always be “the bot said this and the user said this”. The AI is designed for user-to-bot interactions, not for large groups (even though it can work with them). That being said, it’s theoretically possible to feed the {user} input with multiple names and messages… but would it give better responses?

The AI is only as good as the input it gets. Simply adding some usernames won’t always improve the responses; it can also produce confused ones. And in general, the author of the content isn’t relevant, because the output is based on facts, not opinions. So I have some doubts about this.

@codergautam Why don’t you experiment on ChatGPT by giving it the same input (the content of N posts, with a random username above each post) to see how it responds in different scenarios? Also, I’m curious which responses you are getting that led to your request.
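One cheap way to run that experiment is to render the same posts twice, once bare and once with a username line above each post, and compare the model’s answers to the same question. A minimal sketch (the post data and the “wrote:” attribution format are made up for illustration):

```python
# Build two variants of the same thread excerpt: with and without usernames.
posts = [
    {"username": "merefield", "content": "The bot follows reply pointers."},
    {"username": "codergautam", "content": "It thinks all messages are mine."},
]

def render(posts, with_usernames):
    """Join post contents, optionally prefixing each with its author."""
    if with_usernames:
        return "\n\n".join(f"{p['username']} wrote:\n{p['content']}" for p in posts)
    return "\n\n".join(p["content"] for p in posts)

bare = render(posts, with_usernames=False)
attributed = render(posts, with_usernames=True)
# Paste each variant into ChatGPT with the same question
# (e.g. "Summarize just codergautam's posts") and compare the answers.
```

Whatever format wins in that comparison would be the candidate for the plugin.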


Just a thought: add the user’s bio from their profile as part of the context. Even just “MarcP joined in October 2019 and has read 11 hours recently”

That is, put a cast of characters into the preamble.


I hope your wallet is deep, Ed. Every time you add data you are spending money. :) There would also be a token-count challenge at present limits.


Well, I asked the AI to summarize what’s going on in this chat; it thought the whole conversation was from me, and it proceeded to lecture me about how inappropriate I was for doing this.


Interesting. I have a similar case:

If multiple people are posting and you ask the AI something like “Am I right?” or “Summarize just my post”, it wouldn’t know which posts are yours. Those are the two simplest prompts off the top of my head; there are probably many more scenarios where it would be useful to know who posted what.

Just adding usernames isn’t going to cut it, though. We should probably change the system role to something like “You are a helpful assistant on a discussion board” to make it understand why there are random usernames in the context.

As an interesting side thought, I wonder how the system role “You are a helpful assistant on a Discourse powered discussion board.” would perform with the usernames in the context.
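In OpenAI’s chat format, that idea would look roughly like the sketch below. The Chat Completions API accepts an optional `name` field on messages, which is one way to attribute posts without putting usernames inside the content; the exact system-prompt wording here is just a guess at what might perform well:

```python
# Hypothetical message list for a "multiplayer" Discourse thread.
# Roles distinguish user vs. assistant; the optional "name" field
# attributes each user message to a specific poster.
messages = [
    {"role": "system",
     "content": "You are a helpful assistant on a Discourse powered "
                "discussion board. Posts from different users are "
                "attributed by name."},
    {"role": "user", "name": "MarcP", "content": "How do I edit my post?"},
    {"role": "user", "name": "Ed_S", "content": "Summarize just my posts."},
    {"role": "assistant", "content": "Ed_S asked for a summary of his posts."},
]
# The list would then be sent to the gpt-3.5-turbo chat completions
# endpoint via whichever client library the plugin uses.
```

Whether the model actually uses the `name` field better than inline “username wrote:” prefixes is exactly the experiment being discussed.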

I gave it a try, I just asked: “How to get TL3?”

This is the response:

To get TL3 or “Trust Level 3” in Discourse, you need to achieve the following requirements:

  • You need to be a member of the forum for at least 50 days.
  • You need to visit the forum regularly, at least 50% of the days within the last 50 days.
  • You need to read at least 25% of the total number of posts in all topics, or at least 25% of the total number of topics.
  • You need to give at least 20 likes and receive at least 20 likes from others.
  • You need to participate actively in discussions by creating new topics, replying to others’ posts, and consistently giving feedback in at least two or more topics each week.
  • You need to maintain a clean moderation record; in other words, you have not been silenced, warned, or suspended for violating community guidelines.

Once you have met those requirements, the system will automatically promote you to TL3, and you’ll enjoy additional features and perks on the forum.

Here is Rick Sanchez (from Rick & Morty; YT link here if you need an impression of how impressive this is) telling me how to edit my post:

Oh jeez, Morty. It’s easy, just look for the pencil icon or the “edit” button beneath your post. Then you can make any changes you need to, assuming you have permission to do so. Remember, though, if someone has already replied to your post, you might not be able to change certain parts of it without disrupting the conversation. Don’t go ruining the thread, Morty.

Hey brother, can ChatGPT be used to correct your mistakes? (I’m from the Chinese mainland.)


This is a chatbot. That’s another (potentially great) use case entirely, but one I’m thinking about.


Currently the bot follows the reply pointers when building the history and will ignore the messages it skips over. This was done to maintain and optimise a history of each user’s individual chat with the bot. Therefore it might not see every message each time, even if you increase the history count. I’m happy to reconsider that behaviour if it turns out it can be significantly improved, but please come back with a detailed argument. You can probably simulate this with the current setup.
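The reply-pointer behaviour described above can be illustrated with a simplified reconstruction (not the plugin’s actual code; the post structure and field names are assumptions):

```python
# Walk the reply chain backwards from the latest post, collecting only
# posts on that chain, up to a maximum history length.
def build_history(posts_by_number, start_number, max_history):
    history = []
    number = start_number
    while number is not None and len(history) < max_history:
        post = posts_by_number[number]
        history.append(post["content"])
        number = post.get("reply_to")  # None once we reach the OP
    return list(reversed(history))    # oldest first, as a prompt expects

posts = {
    1: {"content": "OP question", "reply_to": None},
    2: {"content": "unrelated aside", "reply_to": None},
    3: {"content": "bot answer", "reply_to": 1},
    4: {"content": "follow-up", "reply_to": 3},
}
history = build_history(posts, start_number=4, max_history=10)
# Post 2 never appears: no reply pointer leads through it,
# so increasing max_history would not bring it in either.
```

This is why raising the history count alone doesn’t guarantee the bot sees every post in the topic.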

Note: I’m not sure the ChatGPT model is meant for “multiplayer” scenarios, so even adding the usernames might not work well. Please come back with detailed findings if possible. We’ll have to experiment.

It is not designed for summarisation, though. Consider using my other plugin for that. You can reuse the same token.


Thanks Marc, a really simple solution that achieved what I was looking for! I’ve just added @aibot to the template for the moment, but I agree there is an opportunity to include a default prompt to encourage an improved response.


So now this is merged:

The bot should now be a bit more knowledgeable as:

  • Usernames are added to all prompts in Topics and Message Channels, so things should work better in circumstances where there are multiple users.
  • In Topics (chat turbo model only):
    • a prompt including the Title is included to help give the bot context
    • the OP is always included in the history
    • in both cases, it doesn’t matter how far down the Topic you are, nor how short your look-back is; they are always included.
  • Prompts can now be changed in Customize → Text under the keys chatbot.prompt.*. Be sure to keep the variables where required. You can strip out the usernames if you want.
    • The system prompt, which tells the bot how to behave in general, is now a localisation too, and the setting has been removed.

@codergautam @Ed_S