@merefield - Fantastic work! Just amazing, thank you for this. In your opinion, would there be any practical use for me to install this on SWAPD?
And does it affect performance?
Hey David,
Treat it as highly experimental. It wouldn't be integrated into specific workflows out of the box, but you might find it amusing, and may find unexpected applications after testing it out.
The key thing is context. It reads the last x posts or messages and decides how to respond based on that and your final prompt. You can design a system prompt to make it behave "in character".
Not especially, there's one lightweight job run for each response.
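To make the context point concrete, here is a rough sketch of the kind of request that gets built for each response. The plugin itself is Ruby; the Python below and every name in it (build_messages, respond, the post fields) are illustrative, not the real code:

```python
# Illustrative only: not the plugin's actual implementation.
import openai  # assumes the pre-1.0 openai Python client

def build_messages(recent_posts, system_prompt, look_behind=10):
    """Turn the last `look_behind` posts into chat messages, led by the system prompt."""
    messages = [{"role": "system", "content": system_prompt}]
    for post in recent_posts[-look_behind:]:
        role = "assistant" if post["author"] == "AIBot" else "user"
        messages.append({"role": role, "content": post["text"]})
    return messages

def respond(recent_posts, system_prompt, max_response_tokens=1000):
    """One lightweight call per response: recent context plus system prompt in, reply out."""
    messages = build_messages(recent_posts, system_prompt)
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=max_response_tokens,  # cf. the chatbot_max_response_tokens setting
    )
    return completion.choices[0].message["content"]
```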
We are talking about three people (including AIBot) in a conversation. The bot would not send anything until the others say hello; that would prevent the repeated messages without limiting the expected interaction.
Everyone could still send a one-on-one message to AIBot, and that's not supposed to be affected by this adjustment.
I'm asking about this because I'm not the only one using the forum, and it would prevent a lot of unexpected behavior.
It's simple for us to just wait for the other guys to write something before mentioning the bot, but not everyone will do that.
Sorry, I'm still confused by your proposal:
- If you mention the bot alone in a Topic or Message Channel, the bot will chat
- As soon as a second person has posted a message or Post, the bot will no longer respond unless targeted.
I deliberately designed it this way to cater for this exact issue.
Is this not a good enough solution?
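To put the rule another way, here is a simplified sketch (illustrative Python, not the plugin's actual code):

```python
# Simplified sketch of the reply rule described above, not the real implementation.
from dataclasses import dataclass

BOT_NAME = "AIBot"

@dataclass
class Post:
    author: str
    mentions_bot: bool = False

def bot_should_reply(channel_posts: list[Post], new_post: Post) -> bool:
    if new_post.mentions_bot:
        return True  # the bot always answers when targeted
    # Alone with the bot in a Topic or Message Channel it chats freely;
    # once a second human has posted, it stays quiet unless targeted.
    human_authors = {p.author for p in channel_posts if p.author != BOT_NAME}
    return len(human_authors) <= 1

# Example: one human so far -> bot chats; after a second human posts, it waits to be targeted.
posts = [Post("alice"), Post(BOT_NAME)]
assert bot_should_reply(posts, Post("alice"))
posts.append(Post("bob"))
assert not bot_should_reply(posts, Post("alice"))
assert bot_should_reply(posts, Post("alice", mentions_bot=True))
```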
- If you talk alone with the bot, the messages get repeated and turn into nonsense. It seems to be related to OpenAI; not a big problem, because we can limit one-on-one chat and that's all.
- If you add someone so that three people are chatting with AIBot in a group chat, the same thing happens if someone mentions the bot first.
That breaks the conversation and could be avoided. So having AIBot wait for two humans before sending a reply could solve that specific issue as a workaround.
Nobody wants to talk to the bot first in chat right now, but that could be limited to avoid unexpected behavior until the first issue is solved.
So why talk to the bot at all until the second person arrives and begins to chat? Isn't that the solution?
And in any case the bot would stop replying as soon as the second person has posted a message.
I'm not sure I understand this.
You can't "add" someone to personal chat presently in Discourse?
In main channels, e.g. General, the bot will not speak unless targeted. I've just checked and indeed it must be targeted from the word go, as this is not classed as a "Direct Message" type channel.
Test case in point:
Bot didn't reply, as it was not targeted.
Once targeted, it replies:
Also the bot is working fine here 1-to-1, so I'm not seeing your issue with 1-to-1 conversations with the bot in shared areas either?
We probably should take this conversation offline in private chat if you wish to elaborate further.
Ah @matenauta I see the problem.
The issue is not the intended rules; it's a bug.
It's not working as intended in Direct Messages with two human participants. It is not currently following the rules I set out above. I will take a look.
OK, that's done.
The existing logic was correct, but the user-count attribute of a channel is updated by a scheduled job, and that job is not guaranteed to have run recently enough to be used in this decision. It is better to calculate the count from first principles, which I've now done.
Actual messages were not necessary for the bot to shut up, only the channel's user membership count, but that count was stale at the time of checking. Now fixed. Apologies for the confusion.
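For the curious, the change is conceptually something like this (illustrative Python with made-up names, not the plugin's Ruby):

```python
# Sketch of the fix: don't trust a cached, job-updated count; derive it when needed.
from dataclasses import dataclass, field

@dataclass
class Membership:
    username: str
    is_bot: bool = False

@dataclass
class Channel:
    memberships: list = field(default_factory=list)
    cached_user_count: int = 0  # maintained by a scheduled job, so it can be stale

def human_count_before(channel: Channel) -> int:
    # Before: trust the job-maintained attribute, which may not have been
    # refreshed yet at the moment the bot has to decide whether to speak.
    return channel.cached_user_count

def human_count_after(channel: Channel) -> int:
    # After: derive the count from the membership records themselves,
    # so the value can never lag behind reality.
    return sum(1 for m in channel.memberships if not m.is_bot)
```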
There might also be the option to run an LLM locally on your server, but you'd need a very expensive, powerful server well above the minimum spec required to run Discourse itself, turning your install into basically an LLM service with a Discourse tacked on the side ;). That's almost certainly going to cost more.
I'm afraid the only practical solution at present is paying for the API use once any free trial period has ended. Much like email.
Hi, I'm grateful for this great plugin, my friend. How can I customize the ChatGPT widget button?
Thank you for the great tip.
This came up before. You can try to ask the bot to finish its message, or increase the token limit: chatbot_max_response_tokens.
I agree that the bot not working within this limit appears to be an unhelpful limitation, but I do not have control over that.
I have this error:
OpenAIBot: There was a problem: This model's maximum context length is 4097 tokens. However, you requested 4224 tokens (2224 in the messages, 2000 in the completion). Please reduce the length of the messages or completion.
But I'm not sure how to set the length for GPT.
Do you have any instructions for this?
Can you just read up one message and you'll find your answer?
You have too many characters in the Posts you are sending. Please reduce the look-back setting (chatbot_max_look_behind); that might help. Ultimately your Posts are getting a bit long!
My chatbot_max_look_behind is 5.
What should I set it to?
Actually, you need to reduce chatbot_max_response_tokens.
("However, you requested 4224 tokens (2224 in the messages, 2000 in the completion). Please reduce the length of the messages or completion.")
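Doing the arithmetic from that error (the messages figure will vary with post length and the look-behind setting):

```python
# Figures taken from the error message above.
context_limit = 4097   # model's maximum context length
message_tokens = 2224  # "2224 in the messages": grows with longer posts and a bigger look-behind
completion_budget = context_limit - message_tokens
print(completion_budget)  # 1873 -> chatbot_max_response_tokens has to fit under whatever is left
```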
My chatbot_max_response_tokens is 2000.
And it still errors when I try setting it to 1500.
What should I set, given that error?