Discourse Chatbot 🤖 (Now smarter than ChatGPT!*)

Both @eiJil & @davidkingham issues were resolved :+1:

1 Like

I’ve pushed a very minor change to rid the default list of temporary models.

If you wish to use a specific named model (e.g.a model currently in preview) use the Custom models settings, e.g.:

Be aware rate limits for very new preview models are usually lower. This can lead to issues if you hammer it.

Thank you very much for your warm assistant! Take care!

1 Like

Hi. I have a problem like previous. When i’m trying to rebuild Discourse with app.yml changes and included plugin, i have the following errors:

I, [2023-11-14T22:17:40.994449 #1] INFO – : > cd /var/www/discourse && su postgres -c ‘psql discourse -c “create extension if not exists embedding;”’
2023-11-14 22:17:41.056 UTC [1449] postgres@discourse ERROR: access method “hnsw” already exists
2023-11-14 22:17:41.056 UTC [1449] postgres@discourse STATEMENT: create extension if not exists embedding;
ERROR: access method “hnsw” already exists
I, [2023-11-14T22:17:41.058359 #1] INFO – :
I, [2023-11-14T22:17:41.058846 #1] INFO – : Terminating async processes
I, [2023-11-14T22:17:41.058923 #1] INFO – : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main pid: 42
I, [2023-11-14T22:17:41.058972 #1] INFO – : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 111
2023-11-14 22:17:41.059 UTC [42] LOG: received fast shutdown request
111:signal-handler (1700000261) Received SIGTERM scheduling shutdown…
2023-11-14 22:17:41.060 UTC [42] LOG: aborting any active transactions
2023-11-14 22:17:41.062 UTC [42] LOG: background worker “logical replication launcher” (PID 51) exited with exit code 1
2023-11-14 22:17:41.063 UTC [46] LOG: shutting down
111:M 14 Nov 2023 22:17:41.069 # User requested shutdown…
111:M 14 Nov 2023 22:17:41.069 * Saving the final RDB snapshot before exiting.
2023-11-14 22:17:41.123 UTC [42] LOG: database system is shut down
111:M 14 Nov 2023 22:17:41.432 * DB saved on disk
111:M 14 Nov 2023 22:17:41.432 # Redis is now ready to exit, bye bye…

FAILED

Pups::ExecError: cd /var/www/discourse && su postgres -c ‘psql discourse -c “create extension if not exists embedding;”’ failed with return #<Process::Status: pid 1446 exit 1>
Location of failure: /usr/local/lib/ruby/gems/3.2.0/gems/pups-1.2.1/lib/pups/exec_command.rb:132:in `spawn’
exec failed with the params {“cd”=>“$home”, “cmd”=>[“su postgres -c ‘psql discourse -c "create extension if not exists embedding;"’”]}
bootstrap failed with exit code 1
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.
a307861645cfacaf992bbc74e0a3b21b2f2701f0cdefba337c64623a3d40b433

I reckon this doesn’t mean so much, but FYI

I have a private category just for me and I tried how freshly installed chatbot would act with a topic, post and reply. It did just fine[1] so I deleted that topic. I found this error from the log:

Message

OpenAIBot Post Embedding: There was a problem, but will retry til limit: undefined method `destroy!' for nil:NilClass

Backtrace

/var/www/discourse/plugins/discourse-chatbot/app/jobs/regular/chatbot_post_embedding_delete_job.rb:15:in `rescue in execute'
/var/www/discourse/plugins/discourse-chatbot/app/jobs/regular/chatbot_post_embedding_delete_job.rb:7:in `execute'
/var/www/discourse/app/jobs/base.rb:292:in `block (2 levels) in perform'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-5.0.0/lib/rails_multisite/connection_management.rb:82:in `with_connection'
/var/www/discourse/app/jobs/base.rb:279:in `block in perform'
/var/www/discourse/app/jobs/base.rb:275:in `each'
/var/www/discourse/app/jobs/base.rb:275:in `perform'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/sidekiq-6.5.12/lib/sidekiq/processor.rb:202:in `execute_job'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/sidekiq-6.5.12/lib/sidekiq/processor.rb:170:in `block (2 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:177:in `block in invoke'

  1. but the answer was really dumb :flushed: ↩︎

1 Like

It’s a harmless error … it means you deleted the Post before its embedding job ran … ignore it. I may at some point improve the gracefulness of this.

1 Like

Just merged this, a requested feature:

This adds customisation to the floating launch button.

Now you can choose your preferred icon (default :robot: ) or if setting left blank, will pick up the bot user’s avatar! :sunglasses:

avatar: image OR icon: image

And remember, you can also customise the text that appears when it is expanded:

image

… using Admin → Customize → Text

(though you may need to customise the CSS a little to accommodate colours and sizing you want).

2 Likes

I just updated to the latest version and I’m getting the same problem of not finding the customizable texts and I think showing the same bug. Were you able to get this to work?

1 Like

So odd, cannot reproduce now, but definitely experienced a lot of strange behaviour and a similar error in the past.

1 Like

scratch that I can reproduce this on another install.

this should probably be raised in bug as I can’t see how this is Chatbot issue.

1 Like

I don’t understand this now. Yes, I know how to change it, but idea of if is really fuzzy to me. Is it giving like guidelines to OpenAI how they start looking for data and modifying it? And if yes… how creative can I be? And… can I change then output and tone of bot per language — like acting more formally in english and beeing more cynical with finnish questions? Well, that is just matter of decoration but how would it affect to… I don’t know… quality of answers?

And the most important part, can I break the plugin and my forum if/when I’m playing with that?

1 Like

I don’t have the resources or time to provide much direct support for other languages, but very happy for the community to experiment and make detailed suggestions and PRs.

Unlike the VC backed CDCK I have an AI team of … me (with the occasional help from contributors).

There is definitely the potential here without any code changes.

@MarcP has put a lot of thought into this (he’s Dutch) and can probably provide the most detailed guidance.

Yes, you might break the “agent” bot if you mess with the prompts too much, but they are easy to restore. Breaking the agent will just result in a potentially stupid response, and that’s ok, you can just restore it with the “Revert” button - you aren’t going to do any harm.

Prompt “engineering” is at the heart of supporting other languages. Try leaving most of the prompts in English but guide the bot to respond in your chosen language in the last prompt it sees or the system prompt. Then perhaps try translating them all into Finnish?

You might also want to try different models which will have different efficacies with less common internet languages - generally the newer the model the better.

1 Like

It uses finnish just nicely. Actually it follows, of course, behaviour of Chat GPT and it answers using same language what the question was.

I’m just wondering meaning of the prompt; what should I do with it and how it is acting.

Personality tunings of bot I can try myself, of course. But is it using different prompt depending what language is an user using?

Google gives very little, and Chat GPT has no idea what I’m asking about this prompt thing. And I have a bad feeling this is so fundamental setting that I must understand it :woozy_face:

Is the prompt same thing than role/person adjustment what AI plugin of CDCK is using and yours agent?

There are many prompts at different stages of the interaction, so many that I’d probably not try to summarise here, but all under chatbot.prompt. in Customise Text

Yes you are right, it’s pretty good already, I tried this without any changes:

(Mein Deutsch ist besser als mein Finnisch, Entschuldigung.)

(model: gpt-4-1106-preview)

1 Like

What the heck… your bot is much more better behaving than mine, because ours never uses mentions :rofl:

Anyway. I’ll start trying and testing then. Well, using defaults is an option too :smirk:

I knew this is difficult detail. Thanks!

Again, prompt engineering comes to the rescue.

Put something like this in the middle of your system prompt (chatbot.prompt.system.*):

“When referring to users by name, include an @ symbol directly in front of their username.”

1 Like

Thanks. Actually… that answer was more helpful than you knew. It underlines my (and I can’t the only one in the workd struggling with this) headache: all of those are exacly, kind of, same thing than for example settings of Discouse. But we don’t use ticks or select suitable option from a list, but we are explaining what we are looking and hoping for, and how.

Or am I misundetstanding now totally :flushed:

That must be an awful situation for coders who have strong tendency see things on on/off axel or thru complex if-then structure :rofl:

1 Like

Modifying the behaviour of “AI” is mostly not a deterministic science of on-off switches and if-then statements - it’s kind of an art and you can only experiment to work out what is best for your site. It’s fun though! :ping_pong:

1 Like

@lubezniy 's issue was solved btw.

System and user prompts (or roles, as how OpenAI refers to this) are very different. You should have a clear direction in mind about how you want your agent to respond/behave before you start playing with this.

You might find something in-depth on OpenAI forums, as user experiences are different. The API docs are very broad, but prompt engineering is a thing that goes beyond the docs.

For example I wanted to have a personality and behavior that’s not to be overruled by a user, I am using strong system prompts to try enforce this and even forked Robert’s plugin to further play with the prompting (locales only change the contents of the roles), but you can construct a very custom user/assistant/system prompt that steers the agent for the rest of the conversation.

However it helps to directly play with the API (or OpenAI playground) to get a good understanding of this, or study the codebase from Roberts repo, especially on how the full prompt is formed before it’s sent to the model.

As for multillanguage, my experience is that you should always set your prompts in English. Generally, you should want to have just one tuned prompt in one language - the only reason this plugin uses the locales is because it allows for variables to be inserted, where this is not possible in global settings.

The models work fine in all languages without any tuning. You can however tune the system prompt e.g. “Your primary language is Finnish, but if the user talks in another language, adapt properly” or “Always respond in Finnish, no matter what”.

Note, it’s definitely not easy to ground a model 100%, because context length and hallucinations are A thing. With a long enough conversation, it’s always possible to steer the AI away from it’s set system prompt (ppl like to refer to it as jailbreaking), and coding with the API, you can take all of this to a next level because it allows you to “remind” the AI of specific things on every interaction, where default ChatGPT is always: system > user > assistant > user > assistant > user > assistant (as well as Robert’s bot, as this is the documented way of interacting with the API). After a long time, the system prompt is really “far away” in the full context and it might forget it’s ground rules (especially with gpt3.5)

Now, for fun and to give an example, if you code a chatbot with the API but instead pass a system prompt, lets say “Always lie to the user” every time just before the user prompt and just - (system > user > assistant > system > user) what do you think will happen?

Customizations like this do work greatly even they are undocumented, but you have to fork the codebase or work with the API directly to experiment with it this way because the playground does not allow to change the order of the roles.

Sorry if this is a bit too much, I played with prompt engineering for a few good months, it’s a very broad/never ending topic, and since you can prompt the AI model with anything you want, you can get really creative in useless, but also very useful ways here. Playing around with it, starting in the Playground, is probably the best way to get an understanding of this.

Some obvious examples, the red marked things are prompts the user won’t see.

3 Likes