sam (Sam Saffron)
April 3, 2025, 3:56am
I have not posted in a while, despite visiting my little chat window daily and having it be helpful at least once or twice per day… consistently.
The reason for my delay here was that I had to work through this rather large change.
better_upload_support → main, opened 07:22AM - 25 Mar 25 UTC
**1. What Led to the Change? (Problems with Previous Approach)**
* **Inconsistent Context Handling:** The previous system often passed context information (like `post_id`, `user`, `private_message`, `topic_id`, `custom_instructions`) around using plain Ruby hashes (`context: {}`). This approach lacked structure, was potentially error-prone (typos in keys), and made it harder to track what context was available or required in different parts of the AI Bot system (Tools, Personas, Bot logic). Accessing context often involved `context[:key]`.
* **Inflexible Image/Upload Handling:** Images associated with a user message were previously passed using a separate `upload_ids: [...]` array within the message hash. This made it difficult or impossible to represent prompts where text and images are interleaved naturally (e.g., "Describe this image {image1}, then compare it to this one {image2} and tell me the difference"). The LLM received the text and a list of associated image IDs, but not their precise relationship *within* the user's text flow.
* **Complex/Decentralized Context Building:** Logic for assembling conversation history (e.g., pulling previous posts/messages, handling custom prompts, associating uploads) was somewhat spread out, notably seen in the significant changes and removals within `lib/ai_bot/playground.rb` (specifically the `conversation_context` and `chat_context` logic being refactored).
**2. What New Support Does It Add? (Key Changes & Benefits)**
* **Introduction of `DiscourseAi::AiBot::BotContext`:**
* **What:** A dedicated class (`BotContext`) is introduced to encapsulate all contextual information for an AI Bot interaction. This includes messages, post/topic details, user information, site details (URL, title, description), time, participants, and control flags (like `skip_tool_details`).
* **Why:** Provides a structured, standardized, and object-oriented way to manage and pass context. This improves code readability, maintainability, and reduces the chance of errors compared to using unstructured hashes. Access changes from `context[:key]` to `context.key`.
* **Impact:** This class is now used consistently when initializing Tools (`Tool#initialize`), crafting prompts (`Persona#craft_prompt`), invoking the bot (`Bot#reply`), and within various helper methods, ensuring a uniform context object is available throughout the system.
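The structured-context idea can be sketched in a few lines. This is a minimal illustration of the pattern only; the real `BotContext` in the discourse-ai plugin carries more fields and helpers, and these attribute names are a subset chosen for the example.

```ruby
# Minimal sketch of the BotContext pattern: a plain-old Ruby object
# replaces an unstructured context hash, so typos in keys become
# NoMethodError at the call site instead of silent nils.
class BotContext
  attr_reader :messages, :post_id, :topic_id, :user,
              :custom_instructions, :skip_tool_details

  def initialize(messages: [], post_id: nil, topic_id: nil, user: nil,
                 custom_instructions: nil, skip_tool_details: false)
    @messages = messages
    @post_id = post_id
    @topic_id = topic_id
    @user = user
    @custom_instructions = custom_instructions
    @skip_tool_details = skip_tool_details
  end
end

context = BotContext.new(post_id: 42, custom_instructions: "be brief")
context.post_id              # instead of context[:post_id]
context.custom_instructions  # instead of context[:custom_instructions]
```

A misspelled accessor now fails loudly (`context.post_idd` raises `NoMethodError`), whereas `context[:post_idd]` on a hash would quietly return `nil`.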
* **Enhanced Multimodal Input (Inline Images/Uploads):**
* **What:** The format for representing user messages with uploads has fundamentally changed. Instead of a separate `upload_ids` array, uploads are now embedded directly *within* the `content` field, which becomes an array if uploads are present. Example: `content: ["Here is an image:", { upload_id: 123 }, "What do you see?"]`.
* **Why:** This allows for precise interleaving of text and visual elements within a single user turn. It's a much more natural way to represent multimodal prompts for vision-capable LLMs, enabling more complex instructions involving multiple images referenced at specific points in the text.
* **Impact:** Required changes across multiple components:
* **`Prompt` Class:** Logic for handling uploads (`encoded_uploads`, `encode_upload`, `content_with_encoded_uploads`, `text_only`) was refactored to support this new inline structure. Validation was updated.
* **LLM Dialects:** All relevant dialects (`ChatGpt`, `Claude`, `Gemini`, `Mistral`, `Nova`, `Ollama`, `OpenAiCompatible`) were updated to correctly parse the new `content` array format and translate it into the specific structure required by each respective LLM API (e.g., OpenAI's array of text/image_url objects, Gemini's parts array). A helper `to_encoded_content_array` was added to the base `Dialect` class.
* **Modules Using Vision:** Code that passes uploads to LLMs (e.g., `LlmTriage`, `Assistant`, `SpamScanner`, `Playground`) was updated to use the new `content` format.
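The translation a dialect performs can be sketched roughly as follows. The method name `to_openai_content` and the pre-encoded uploads lookup are hypothetical, illustrative stand-ins, not the plugin's actual API; the point is how the inline `content` array maps onto OpenAI-style text/image parts while preserving order.

```ruby
# Hypothetical sketch: translate the new inline content array into
# OpenAI's array of text / image_url parts, keeping the original
# interleaving of text and images intact.
def to_openai_content(content, uploads_by_id)
  Array(content).map do |chunk|
    if chunk.is_a?(Hash) && chunk[:upload_id]
      encoded = uploads_by_id.fetch(chunk[:upload_id]) # base64 data, pre-encoded
      { type: "image_url",
        image_url: { url: "data:image/png;base64,#{encoded}" } }
    else
      { type: "text", text: chunk.to_s }
    end
  end
end

content = ["Here is an image:", { upload_id: 123 }, "What do you see?"]
parts = to_openai_content(content, { 123 => "iVBORw0KGgo=" })
# parts is [text, image_url, text] -- order preserved
```

Each dialect does the equivalent mapping into its own provider's shape (e.g. Gemini's `parts` array), which is why every dialect needed updating.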
* **Refactored Context Building:**
* **What:** Logic for building conversation history from posts or chat messages seems to be increasingly centralized in `DiscourseAi::Completions::PromptMessagesBuilder`. New methods like `messages_from_post` and `messages_from_chat` appear to encapsulate this logic.
* **Why:** Simplifies components like the `Playground` by abstracting away the details of fetching and formatting conversation history, including handling the new inline upload format.
* **Impact:** Significant simplification in `lib/ai_bot/playground.rb`, removing large chunks of previous context-building code.
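The centralization idea can be illustrated with a toy builder. The `Post` struct and `messages_from_posts` below are hypothetical simplifications, not the plugin's actual `PromptMessagesBuilder` API; they just show how post history becomes prompt messages with uploads embedded inline in the content array.

```ruby
# Illustrative sketch of a centralized message builder: posts in, prompt
# messages out, with uploads attached inline rather than in a separate
# upload_ids array.
Post = Struct.new(:username, :raw, :upload_ids, keyword_init: true)

def messages_from_posts(posts, bot_username:)
  posts.map do |post|
    role = post.username == bot_username ? :model : :user
    content =
      if post.upload_ids.to_a.empty?
        post.raw
      else
        [post.raw, *post.upload_ids.map { |id| { upload_id: id } }]
      end
    { type: role, content: content }
  end
end

posts = [
  Post.new(username: "alice", raw: "Compare these", upload_ids: [1, 2]),
  Post.new(username: "gpt4_bot", raw: "They differ in color.", upload_ids: []),
]
messages = messages_from_posts(posts, bot_username: "gpt4_bot")
```

Callers like the Playground then only need one call to get well-formed history, instead of duplicating fetch-and-format logic.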
It provides a subtle, yet critical, improvement to Discourse AI.
I was regularly noticing the moderation bot talk about completely irrelevant images, due to the way we constructed context. The change allows us to present mixed content, with images and text interleaved in the correct order.
This means the LLM no longer gets confused.
What’s next?
We have no way in Automation to call a rule only after post editing has “settled”. LLM calls can be expensive, and we don’t want to re-scan a post over and over just because people fix typos. I am not sure this is required here, but I would like to allow for the possibility of triggering an automation once a post settles into its new shape.
Prompt engineering - the current prompt is OK, but a bit too loud for my liking; it bugs me a bit too much, so I may soften it some.
Improved context - one thing that really bugs me is that the automation now has no awareness of user trust. Some users are far more trusted in a community than others (e.g. moderators). I would like to see if we can improve this story.
Ability to run the automation on batches of posts for fast iterations.
I am sure a lot more will pop up.