The bot should not reply unless mentioned or explicitly replied to so long as there is more than one human participant in the topic or channel. If there is only one human in the Topic or Channel, the bot, if already engaged, will assume the message is for it (pretty logical, no?).
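In pseudocode terms it boils down to something like this (a rough Ruby sketch with made-up inputs, not the plugin's actual code):

```ruby
# Rough sketch of the rule above; the inputs are plain values, so nothing
# here depends on the plugin's real internals.
def should_reply?(mentioned:, replied_to_bot:, human_count:, bot_engaged:)
  # A mention or an explicit reply always triggers a response.
  return true if mentioned || replied_to_bot

  # Only one human present and the bot already engaged:
  # assume the message is addressed to the bot.
  human_count == 1 && bot_engaged
end

should_reply?(mentioned: false, replied_to_bot: false, human_count: 2, bot_engaged: true)
# => false: with a second human present, the bot stays quiet unless targeted
```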
There shouldn't be any need to change that currently designed behaviour, unless we find a bug? I certainly don't want to add any UI elements if at all possible.
That's a to-do and reported above. I'll get around to sorting that soon. I recently fixed a similar bug for the summary plugin. I should have added that to known issues. Only normal posts should be counted as a trigger or processed as replies.
Maybe that's confusing the algorithm.
Can you be any more specific about the circumstances? That would definitely not be intended behaviour.
Ah … I think I can explain this, sounds like an edge case (?):
The number of humans counted only includes people who have actually posted messages/posts. I suggest (for now at least) a couple of workarounds:
- not addressing the bot until the 2nd user has posted, if you don't want the bot to keep talking. The bot should stop auto-responding as soon as a second person actually takes part beyond just reading.
- deleting the unwanted bot responses.
Can you confirm the bot stops auto-responding once a second person has actually posted?
This behaviour is designed to make it easy to talk to the bot when no other person is actively involved. I'm not sure, on balance, how we could improve upon that?
As for repetition, that's up to OpenAI's model.
What about "wait until the chat receives replies from two people"? I don't really know if there is a trigger like that, but I suppose it could be feasible.
I can confirm that multisites are working if we didn't use default categories for trust levels before.
And chatting alone with the bot is almost impossible because it just talks all the time and burns tokens without any utility.
@merefield - Fantastic work! Just amazing. Thank you for this. In your opinion, would there be any practical use for me to install this on SWAPD? And does it affect performance?
Treat it as highly experimental. It wouldn't be integrated into specific workflows out of the box, but you might find it amusing and may find unexpected applications after testing it out.
The key thing is context. It reads the last x posts or messages and decides how to respond based on that and your final prompt. You can design a system prompt to make it behave "in character".
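Very roughly, and with made-up names rather than the plugin's real code, the idea looks like this:

```ruby
# Toy sketch of the context window: take the last N posts, prepend the
# system prompt, and hand the result to the model in OpenAI's chat format.
# The shape of `posts` and the default names are assumptions for the example.
def build_messages(posts, system_prompt, window_size: 10, bot_username: "AIBot")
  recent = posts.last(window_size)
  history = recent.map do |post|
    role = post[:username] == bot_username ? "assistant" : "user"
    { role: role, content: post[:raw] }
  end
  [{ role: "system", content: system_prompt }] + history
end
```

The system prompt slot is where the "in character" behaviour comes from.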
Not especially, there's one lightweight job run for each response.
We are talking about 3 people (including AIBot) in a conversation. The bot could send nothing until the others say hello; that would prevent the repeated messages but not limit the expected interaction.
Everyone could still send a 1-on-1 message to AIBot, and that's not supposed to be affected by this adjustment.
I'm asking about that because I'm not the only one using the forum, and that would prevent a lot of unexpected behavior.
It's simple for us just to wait for the other guys to write something before mentioning the bot, but not everyone will do it.
If you talk alone with the bot, the messages are repeated and nonsensical. It seems to be related to OpenAI; it's not a big problem because we can limit 1-on-1 chat, and that's all.
If you add someone to a 3-person group chat with AIBot, the same thing happens if someone mentions the bot first.
That breaks the conversation and could be avoided. So having AIBot wait for two humans before sending a reply could solve that specific issue as a workaround.
Nobody wants to talk to the bot first in chat right now, but that could be limited to avoid unexpected behavior until the first issue is solved.
So why talk to the bot at all until the second person arrives and begins to chat? Isn't that the solution?
And in any case the bot would stop replying as soon as the second person has posted a message.
I'm not sure I understand this.
You can't "add" someone to a personal chat presently in Discourse?
In main channels, e.g. General, the bot will not speak unless targeted. I've just checked, and indeed it must be targeted from the word go, as this is classed as not a "Direct Message" type channel.
The issue is not the intended rules, it's a bug.
It's not working as intended on Direct Messages with two human participants. It is not currently following the rules I set out above. I will take a look.
The existing logic was correct, but the user count attribute of a channel is updated by a background job, and that job is not guaranteed to have run recently enough for use in this decision, so it is better to calculate the count from first principles, which I've now done.
Actual messages were not necessary for the bot to shut up, only the channel's user membership count, but that count was incorrect at the time of checking; it's now fixed. Apologies for the confusion.
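In effect the change is the difference between reading a cached counter and counting directly, something like this toy illustration (made-up data, not the actual plugin code):

```ruby
# Count the humans from the membership list at decision time, instead of
# trusting a counter that a background job refreshes on its own schedule.
def human_member_count(memberships)
  memberships.count { |m| !m[:bot] }
end

memberships = [
  { username: "alice", bot: false },
  { username: "AIBot", bot: true }
]
human_member_count(memberships) # => 1, so the bot may keep auto-responding
```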
There might also be the option to run an LLM locally on your server, but you'd need a very expensive, powerful server well above the minimum spec required to run Discourse itself, basically turning your install into an LLM service with a Discourse tacked on the side ;). That's almost certainly going to cost more.
I'm afraid the only practical solution at present is paying for the API use once any free trial period has ended. Much like email.