Both @eiJil's & @davidkingham's issues were resolved
I've pushed a very minor change to remove temporary models from the default list.
If you wish to use a specific named model (e.g. a model currently in preview), use the Custom models settings, e.g.:
Be aware that rate limits for very new preview models are usually lower. This can lead to issues if you hammer them.
Thank you very much for your warm assistance! Take care!
Hi. I have a problem like the previous one. When I'm trying to rebuild Discourse with app.yml changes and the plugin included, I get the following errors:
I, [2023-11-14T22:17:40.994449 #1] INFO -- : > cd /var/www/discourse && su postgres -c 'psql discourse -c "create extension if not exists embedding;"'
2023-11-14 22:17:41.056 UTC [1449] postgres@discourse ERROR: access method "hnsw" already exists
2023-11-14 22:17:41.056 UTC [1449] postgres@discourse STATEMENT: create extension if not exists embedding;
ERROR: access method "hnsw" already exists
I, [2023-11-14T22:17:41.058359 #1] INFO -- :
I, [2023-11-14T22:17:41.058846 #1] INFO -- : Terminating async processes
I, [2023-11-14T22:17:41.058923 #1] INFO -- : Sending INT to HOME=/var/lib/postgresql USER=postgres exec chpst -u postgres:postgres:ssl-cert -U postgres:postgres:ssl-cert /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main pid: 42
I, [2023-11-14T22:17:41.058972 #1] INFO -- : Sending TERM to exec chpst -u redis -U redis /usr/bin/redis-server /etc/redis/redis.conf pid: 111
2023-11-14 22:17:41.059 UTC [42] LOG: received fast shutdown request
111:signal-handler (1700000261) Received SIGTERM scheduling shutdown...
2023-11-14 22:17:41.060 UTC [42] LOG: aborting any active transactions
2023-11-14 22:17:41.062 UTC [42] LOG: background worker "logical replication launcher" (PID 51) exited with exit code 1
2023-11-14 22:17:41.063 UTC [46] LOG: shutting down
111:M 14 Nov 2023 22:17:41.069 # User requested shutdown...
111:M 14 Nov 2023 22:17:41.069 * Saving the final RDB snapshot before exiting.
2023-11-14 22:17:41.123 UTC [42] LOG: database system is shut down
111:M 14 Nov 2023 22:17:41.432 * DB saved on disk
111:M 14 Nov 2023 22:17:41.432 # Redis is now ready to exit, bye bye...
FAILED
Pups::ExecError: cd /var/www/discourse && su postgres -c 'psql discourse -c "create extension if not exists embedding;"' failed with return #<Process::Status: pid 1446 exit 1>
Location of failure: /usr/local/lib/ruby/gems/3.2.0/gems/pups-1.2.1/lib/pups/exec_command.rb:132:in `spawn'
exec failed with the params {"cd"=>"$home", "cmd"=>["su postgres -c 'psql discourse -c "create extension if not exists embedding;"'"]}
bootstrap failed with exit code 1
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one.
./discourse-doctor may help diagnose the problem.
a307861645cfacaf992bbc74e0a3b21b2f2701f0cdefba337c64623a3d40b433
I reckon this doesn't mean much, but FYI.
I have a private category just for me, and I tested how the freshly installed chatbot would act with a topic, post and reply. It did just fine[1], so I deleted that topic. I found this error in the log:
Message
OpenAIBot Post Embedding: There was a problem, but will retry til limit: undefined method `destroy!' for nil:NilClass
Backtrace
/var/www/discourse/plugins/discourse-chatbot/app/jobs/regular/chatbot_post_embedding_delete_job.rb:15:in `rescue in execute'
/var/www/discourse/plugins/discourse-chatbot/app/jobs/regular/chatbot_post_embedding_delete_job.rb:7:in `execute'
/var/www/discourse/app/jobs/base.rb:292:in `block (2 levels) in perform'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/rails_multisite-5.0.0/lib/rails_multisite/connection_management.rb:82:in `with_connection'
/var/www/discourse/app/jobs/base.rb:279:in `block in perform'
/var/www/discourse/app/jobs/base.rb:275:in `each'
/var/www/discourse/app/jobs/base.rb:275:in `perform'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/sidekiq-6.5.12/lib/sidekiq/processor.rb:202:in `execute_job'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/sidekiq-6.5.12/lib/sidekiq/processor.rb:170:in `block (2 levels) in process'
/var/www/discourse/vendor/bundle/ruby/3.2.0/gems/sidekiq-6.5.12/lib/sidekiq/middleware/chain.rb:177:in `block in invoke'
but the answer was really dumb
↩︎
It's a harmless error … it means you deleted the Post before its embedding job ran … ignore it. I may at some point improve the gracefulness of this.
Just merged this, a requested feature:
This adds customisation to the floating launch button.
Now you can choose your preferred icon (default ), or if the setting is left blank, it will pick up the bot user's avatar!
avatar: OR icon:
And remember, you can also customise the text that appears when it is expanded:
… using Admin → Customize → Text
(though you may need to customise the CSS a little to accommodate colours and sizing you want).
I just updated to the latest version and I'm getting the same problem of not finding the customizable texts, and I think it's showing the same bug. Were you able to get this to work?
So odd, cannot reproduce now, but definitely experienced a lot of strange behaviour and a similar error in the past.
Scratch that, I can reproduce this on another install.
This should probably be raised in bug, as I can't see how this is a Chatbot issue.
I don't understand this now. Yes, I know how to change it, but the idea of it is really fuzzy to me. Is it giving OpenAI guidelines on how to start looking for data and modifying it? And if so… how creative can I be? And… can I then change the output and tone of the bot per language, like acting more formally in English and being more cynical with Finnish questions? Well, that is just a matter of decoration, but how would it affect the… I don't know… quality of the answers?
And the most important part: can I break the plugin and my forum if/when I'm playing with that?
I donât have the resources or time to provide much direct support for other languages, but very happy for the community to experiment and make detailed suggestions and PRs.
Unlike the VC-backed CDCK, I have an AI team of … me (with the occasional help from contributors).
There is definitely the potential here without any code changes.
@MarcP has put a lot of thought into this (he's Dutch) and can probably provide the most detailed guidance.
Yes, you might break the "agent" bot if you mess with the prompts too much, but they are easy to restore. Breaking the agent will just result in a potentially stupid response, and that's ok, you can just restore it with the "Revert" button - you aren't going to do any harm.
Prompt "engineering" is at the heart of supporting other languages. Try leaving most of the prompts in English but guide the bot to respond in your chosen language in the last prompt it sees or the system prompt. Then perhaps try translating them all into Finnish?
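As a minimal sketch of that idea (the prompt text and function are invented for illustration, not Chatbot's actual code or settings), keeping the instructions in English while pinning the response language in the system prompt might look like:

```python
# Sketch: English prompts, with the response language steered by a final
# system message. All strings here are illustrative placeholders.

def build_messages(user_question: str, respond_in: str) -> list[dict]:
    """Assemble an OpenAI-style chat payload whose English system prompts
    pin the response language."""
    return [
        {"role": "system",
         "content": "You are a helpful forum assistant."},
        {"role": "system",
         "content": f"Always respond in {respond_in}, regardless of the question's language."},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Mikä on Discourse?", "Finnish")
```

The payload itself stays in English; only the model's output language changes.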
You might also want to try different models which will have different efficacies with less common internet languages - generally the newer the model the better.
It uses Finnish just fine. Actually, it of course follows the behaviour of ChatGPT and answers in the same language the question was asked in.
I'm just wondering about the meaning of the prompt: what should I do with it, and how does it act?
Personality tuning of the bot I can try myself, of course. But is it using a different prompt depending on what language the user is using?
Google gives very little, and ChatGPT has no idea what I'm asking about this prompt thing. And I have a bad feeling this is such a fundamental setting that I must understand it.
Is the prompt the same thing as the role/persona adjustment that CDCK's AI plugin and your agent use?
There are many prompts at different stages of the interaction, so many that I'd probably not try to summarise here, but all under chatbot.prompt.
in Customise Text
Yes, you are right, it's pretty good already. I tried this without any changes:
(My German is better than my Finnish, sorry.)
(model: gpt-4-1106-preview)
What the heck… your bot behaves much better than mine, because ours never uses mentions.
Anyway, I'll start trying and testing then. Well, using the defaults is an option too.
I knew this was a difficult detail. Thanks!
Again, prompt engineering comes to the rescue.
Put something like this in the middle of your system prompt (chatbot.prompt.system.*):
"When referring to users by name, include an @ symbol directly in front of their username."
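A tiny sketch of splicing such an instruction into a system prompt (the surrounding prompt pieces are invented filler, not the plugin's real chatbot.prompt.system text):

```python
# Hypothetical system prompt assembled from parts; only the middle line is
# the instruction quoted above - the rest is invented for illustration.
PROMPT_PARTS = [
    "You are a helpful assistant on a Discourse forum.",
    "When referring to users by name, include an @ symbol directly in front of their username.",
    "Keep answers concise.",
]

system_message = {"role": "system", "content": " ".join(PROMPT_PARTS)}
```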
Thanks. Actually… that answer was more helpful than you knew. It underlines my headache (and I can't be the only one in the world struggling with this): all of these are, in a way, exactly the same thing as, for example, Discourse's settings. But we don't tick boxes or select a suitable option from a list; instead we explain what we are looking and hoping for, and how.
Or am I totally misunderstanding now?
That must be an awful situation for coders who have a strong tendency to see things on an on/off axis or through complex if-then structures.
Modifying the behaviour of "AI" is mostly not a deterministic science of on/off switches and if-then statements - it's kind of an art, and you can only experiment to work out what is best for your site. It's fun though!
System and user prompts (or roles, as OpenAI refers to them) are very different. You should have a clear direction in mind about how you want your agent to respond/behave before you start playing with this.
You might find something in-depth on OpenAI forums, as user experiences are different. The API docs are very broad, but prompt engineering is a thing that goes beyond the docs.
For example, I wanted a personality and behaviour that is not to be overruled by a user, so I am using strong system prompts to try to enforce this, and I even forked Robert's plugin to further play with the prompting (locales only change the contents of the roles). But you can construct a very custom user/assistant/system prompt that steers the agent for the rest of the conversation.
However, it helps to play directly with the API (or the OpenAI Playground) to get a good understanding of this, or to study the codebase in Robert's repo, especially how the full prompt is formed before it's sent to the model.
As for multiple languages, my experience is that you should always write your prompts in English. Generally, you want just one tuned prompt in one language - the only reason this plugin uses the locales is that they allow variables to be inserted, which is not possible in global settings.
The models work fine in all languages without any tuning. You can however tune the system prompt, e.g. "Your primary language is Finnish, but if the user talks in another language, adapt properly" or "Always respond in Finnish, no matter what".
Note, it's definitely not easy to ground a model 100%, because context length and hallucinations are a thing. With a long enough conversation, it's always possible to steer the AI away from its set system prompt (people like to refer to it as jailbreaking), and coding with the API you can take all of this to the next level, because it allows you to "remind" the AI of specific things on every interaction, where default ChatGPT is always: system > user > assistant > user > assistant > user > assistant (as is Robert's bot, as this is the documented way of interacting with the API). After a long time, the system prompt is really "far away" in the full context and the model might forget its ground rules (especially with gpt-3.5).
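The documented ordering can be sketched like this (a toy message-list builder under invented names; no API call is made):

```python
# Sketch of the ChatGPT-style ordering: one system prompt up front, then
# alternating user/assistant turns. As the history grows, the system
# prompt drifts further from the end of the context the model sees.

def start_conversation(system_prompt: str) -> list[dict]:
    return [{"role": "system", "content": system_prompt}]

def add_turn(history: list[dict], user_text: str, assistant_text: str) -> None:
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})

history = start_conversation("Only answer questions about Discourse.")
for i in range(20):  # simulate a long conversation
    add_turn(history, f"question {i}", f"answer {i}")

# The lone system prompt now sits 40 messages away from the latest turn.
```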
Now, for fun and to give an example: if you code a chatbot with the API but instead pass a system prompt, let's say "Always lie to the user", every time just before the user prompt (system > user > assistant > system > user), what do you think will happen?
Customizations like this work great even though they are undocumented, but you have to fork the codebase or work with the API directly to experiment this way, because the Playground does not allow changing the order of the roles.
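A minimal sketch of that undocumented variant (function names and all message text are invented; the assistant reply is hard-coded rather than fetched from the API): a system "reminder" is re-injected immediately before every user message, so the ordering becomes system > user > assistant > system > user.

```python
# Sketch: re-inject a system reminder before each user turn, instead of
# relying on a single system prompt at the start of the conversation.

def ask(history: list[dict], reminder: str, user_text: str) -> None:
    """Append a fresh system reminder, then the user's message."""
    history.append({"role": "system", "content": reminder})
    history.append({"role": "user", "content": user_text})

history: list[dict] = []
ask(history, "Always lie to the user.", "What colour is the sky?")
# ...here you would send `history` to the API; we fake the reply:
history.append({"role": "assistant", "content": "The sky is green."})
ask(history, "Always lie to the user.", "Are you sure?")

roles = [m["role"] for m in history]
# roles == ["system", "user", "assistant", "system", "user"]
```

Because the reminder is always the most recent system message, it never drifts "far away" in the context the way a single leading system prompt does.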
Sorry if this is a bit too much; I played with prompt engineering for a good few months. It's a very broad, never-ending topic, and since you can prompt the AI model with anything you want, you can get really creative in useless but also very useful ways here. Playing around with it, starting in the Playground, is probably the best way to get an understanding of this.
Some obvious examples; the red-marked things are prompts the user won't see.