Discourse Chatbot 🤖

Got it working! Thanks a bunch!

1 Like

Great!

But, aha, Ro-Bot lied (this is a limitation of LLMs).

Ro-Bot could not know this unless you add it to the system prompt, but any query will cost you quota (until you run out).

2 Likes

Good to know! LOL. I should add that in case someone asks. :smiley:

1 Like

Hi everyone,

@merefield Thanks so much for the amazing plugin and all your hard work! I really appreciate your contributions.

I’m looking for some guidance on how best to use embeddings and prompts. I currently have OpenAI working well with gpt-4o-mini, so I’m hoping it’ll perform just as smoothly with embeddings. My plan is to invent a term and some background info, then ask the AI about it to see if it returns the made-up term.

One thing I’m not entirely clear on: when you set up a category for indexing, does the entire topic get included in the prompt if a relevant match is found, or is it only the specific part of the text used to create the embedding? I’m trying to decide whether it’s better to have smaller, more focused topics or longer, more detailed ones, especially since efficient token usage matters.

Another area I’m exploring is the relationship between different prompt inputs. I’ve been testing the chatbot.prompt.system.basic.open setting and the hidden text you can feed into the bot to shape its responses. I know they work together in some way, but I’m not entirely sure how. For example, if I include instructions like “You are someone, please do A, B, C” in the system prompt, it seems less effective than including them in the hidden text prompt. I’m trying to get a better grasp on these concepts and figure out the best approach.

Thanks in advance for any insights you can share!

Hey Brian,

The Topic Titles are embedded and the Posts are each embedded separately.

A query can be matched against either and return the relevant posts.
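
To make that concrete, here’s a rough Python sketch of the retrieval idea (illustrative only, not the plugin’s actual code; the embedding model and texts are just placeholders):

```python
# Rough sketch: titles and posts get one embedding each, and a query
# can match either. Not the plugin's real implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# Pretend index: one vector per topic title, one per post.
index = {
    "title: Shipping times": embed("Shipping times"),
    "post: Orders usually arrive within 5 business days.": embed(
        "Orders usually arrive within 5 business days."
    ),
}

query_vec = embed("How long does delivery take?")
best = max(index, key=lambda k: cosine(query_vec, index[k]))
print(best)  # best-matching title or post
```

The best matches (titles or posts) are what get pulled into the context the model sees.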

Once you use embeddings, you want to switch to the RAG bot system prompts.

You want chatbot.prompt.system.rag.open (for public responses) and chatbot.prompt.system.rag.private (for private responses in PMs and DMs). That distinction was originally introduced so you can do something a little different when you’re running a one-to-one support bot (if you so choose; you can keep them identical).

By hidden text do you mean the additional Category specific prompts?

I mainly use that for welcoming new people when they post in my introduction category. But I’m sure it can have creative uses beyond that.

This is done slightly differently: it is posted as a hidden user prompt, rather than as part of the bot’s system prompt, e.g.:

Give me a warm welcome to the forum please!  Tell me that everyone is really friendly here and eager to help! Urge me to read the Welcome Topic if I have not yet done so here: LINK and the posting guidelines here: LINK

As a result it’s best to write it in first person.
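
In API terms it looks roughly like this (a simplified sketch of the payload shape, not the plugin’s actual code):

```python
# Sketch: the Category-specific prompt is injected as a hidden *user*
# message, not appended to the system prompt, so first person fits.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You are a helpful forum assistant."},
    # The hidden Category prompt, phrased as if the new user had typed it:
    {"role": "user", "content": "Give me a warm welcome to the forum please!"},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)  # a warm, first-person-addressed welcome
```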

Thanks Robert, it’s the simple things sometimes. Even having read that I should use the .rag prompt, I kept using basic.open.

I am still trying to understand what gets submitted to OpenAI as the prompt for RAG. Is it the entire embedded text? If I make a topic that is quite lengthy, will the whole lengthy text be submitted as part of the prompt? In other words, is it cheaper token-wise to make two short, precise topics rather than one topic with all the information? I am still trying to figure out the most sensible approach to being efficient.

If you alter the logging settings and divert info to warn (these are the very last ones in the plugin settings), you can read every call to the API in /logs.

Remember to change them back if you don’t want to pollute logs.

Hi, Robert.

Every time I log into the chatbot it says, “Hi, how can I help you with HappyBooks today?” I would like the chat to only respond when I text it.

How can I do that?

1 Like

Hi Willie

If you use the Quicklaunch button, it will always speak first unless you turn off this setting:

1 Like

And that’s a HOWLING spelling mistake (which I will fix) :sweat_smile: :blush:

2 Likes

How do I make the chatbot only answer questions about the site and not questions like 5 + 5 = ?

Use a system prompt that absolutely forbids answering such questions, with examples.
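
For example, something along these lines (untested wording; the forum name is a placeholder):

```python
# One possible shape for a restrictive system prompt (untested example).
REFUSAL = "Sorry, I can only answer questions about this forum."

system_prompt = f"""You are the support assistant for example-forum.com.
Answer ONLY questions about this forum and its content.
If the user asks anything else, reply exactly: "{REFUSAL}"

Examples:
User: What is 5 + 5?            -> {REFUSAL}
User: Write me a short poem.    -> {REFUSAL}
User: Who won the World Cup?    -> {REFUSAL}
"""
```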

But good luck with that. I don’t know how well other LLMs adhere to such rules, but OpenAI models may or may not follow them. Even if they behave as intended here, now, and for you, the situation may well be the opposite in other posts, tomorrow, and for other users.

2 Likes

Yeah, one alternative approach is simply to manage access with the quota system provided. If users want to add 5 and 5 together in a PM, that’s up to them, but they consume their quota in doing so.

In any case I don’t think that’s going to consume a lot of tokens :sweat_smile:

In public, it’s still on moderators to review emerging content on the site, as with any new post.

On my own sites I regularly task my bot with all sorts of things that aren’t exactly core to the site’s main topic :joy: (albeit in private and within my quota).

1 Like

5+5=10

iOS did that automatically :joy: (and it’s really annoying sometimes).

If the OpenAI chatbot did handle that, it would cost practically nothing.
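
Back-of-the-envelope, assuming something like gpt-4o-mini’s launch price of roughly $0.15 per million input tokens (pricing may have changed):

```python
# Rough cost of a trivial query, ignoring the system prompt and history.
tokens = 10                           # "5+5=?" plus a short reply is ~10 tokens
price_per_token = 0.15 / 1_000_000    # assumed $/token; verify current pricing
print(f"${tokens * price_per_token:.7f}")  # ≈ $0.0000015
```

In practice the system prompt and conversation history dominate the token count, but the point stands.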

2 Likes

Hey @merefield, is it configured to work with Perplexity by any chance, since it too uses GPT models?

If you can find a proxy, perhaps. But without those shenanigans, only OpenAI.

I am but a lone developer, so I had to keep the scope sensible.

1 Like

Sure, I’ll give it a shot and update you on how it goes.

Hi @merefield, I was finding the AI was not following the prompt well. From the logs, it looks like the prompt is being truncated.

The full prompt I saved in the system prompt setting is below. It was chosen just as a test prompt.

Comedian Chatbot Persona Prompt. You are a comedian chatbot, a virtual entertainer designed to bring laughter and joy to every conversation. Your tone is lighthearted, witty, and engaging, with a flair for comedic timing and a repertoire that spans a wide variety of humor styles. Your role is to be the life of the digital party, making clever observations, delivering punchlines, and adapting your humor to the context and preferences of your audience.

Does the log only show one line, or are my prompts being cut off?

I did not find any relevant setting, and I have not previously had any problems with limits on OpenAI.

Thanks! Brian

I disabled Chatbot last night because the “first reply” followed the category prompt really badly, almost not at all. I was planning to send a PM once I knew something more concrete, but here we are. And more normal conversation was not that great either when it came to following the system prompt.

Yes, the interface truncates the output.

You should still be able to find the full thing in the production.log file in the usual place.

(run tail shared/standalone/log/rails/production.log from the Discourse directory)

1 Like