Weekly AI Topics Summary

Overview

This week’s AI conversations on Meta centered on making Discourse AI clearer to users and easier to operate at scale. On the product side, there was strong momentum to rename “AI Persona” to the more widely understood “AI Agent” (with translation workflow implications) in Renaming AI Persona → AI Agent and its follow-up discussions. Admin experience also got attention: sites with AI disabled were still seeing AI dashboards/reports, which was confirmed as a bug and routed into broader reporting work in Don’t show AI reports if AI is not enabled and the related umbrella thread Admin Reporting & Analysis: Incremental Changes.

Operationally, the community dug into cost/performance controls and scaling pain points: Discourse rolled out OpenAI/Azure provider service tiers in Service tiers on Open AI providers, while a large self-hosted instance reported severe load when turning on semantic embeddings search in Enabling AI search crippled my server. There was also continued refinement around AI-assisted UX—especially where AI touches localization and editor UI—in Saving translations by AI-helper as content localization and The title suggester :star: button is placed outside of the title field when editing a translated title.

Finally, AI-adjacent ecosystem work continued with MCP tooling: a practical setup guide for Codex CLI landed in Discourse MCP Setup in OpenAI Codex CLI and was cross-linked back into the canonical announcement thread, Discourse MCP is here!.


Interesting Topics


Activity

In total (last 7 days): 6 new topics and 25 posts, with the heaviest engagement around naming/UX polish and practical scaling/cost controls for embeddings and OpenAI usage—see Renaming AI Persona → AI Agent, Service tiers on Open AI providers, and Enabling AI search crippled my server.

Thanks for reading, and I’ll see you again next week! :slight_smile:

Overview

Over the past week (2026-03-09 → 2026-03-16), Meta’s AI discussions clustered around product polish, reliability, and real-world operations.

On the product side, Discourse moved closer to standardizing terminology by implementing the rename from AI Persona to AI Agent (Renaming AI Persona → AI Agent). On the infrastructure side, Discourse significantly expanded capacity for its hosted LLM offering—raising limits across all tiers and improving model quality and latency characteristics (Unlock All Discourse AI Features with Our Hosted LLM).

Meanwhile, operators focused on how AI fits into community rhythms: a request to delay AI Agent replies (so they feel less like a chatbot and more like a participant) surfaced both as a new Support topic (Adding a configurable delay to AI Agent responses) and as a follow-up in the longer-running “Agents” guide thread, where Discourse staff indicated that delayed responses would likely belong in a future automation overhaul rather than in Discourse AI itself (AI bot - Agents).

Integration conversations had a notable bump too: Google’s Programmable Search / Custom Search constraints and deprecations are forcing a rethink of web search tooling, with Discourse exploring alternative providers and even “native search tools” from LLM vendors (Google Search for Discourse AI - Programmable Search Engine and Custom Search API). In parallel, community guides continued to expand around the Discourse MCP ecosystem, including a newly posted OpenCode CLI setup walkthrough (Discourse MCP Setup in OpenCode CLI).

Finally, practical admin workflows came up repeatedly: improving observability for AI spam detection via direct database queries (Discourse AI - Spam detection), questions about sentiment analysis backfilling and debugging (Problems setting up Sentiment), and GDPR-oriented concerns about sentiment processing depending on provider/configuration (Introducing Discourse AI Sentiment Analysis: New Admin Report Available). There was also an open Support thread (in Chinese) on tool-call timeouts, still in the “need more details” stage (Discourse ai 的工具调用超时如何解决?是否可以调整discourse超时时间,如何调整?).


Interesting Topics


Activity


Thanks for reading, and I’ll see you again next week! :slight_smile:

Weekly AI Summary for meta.discourse.org (2026-03-16 → 2026-03-23)

Overview

AI discussions this week clustered around practical UX and cost-control improvements, especially for translation workflows and summarization placement. On the translation side, Shauny proposed a smoother per-post “translate” affordance plus a way to save/cache translated output to avoid repeated API spend (Translate post with AI and save translation), with Moin linking the idea to earlier localization thinking (Saving translations by AI Helper as content localization).

On the summarization UI front, Ivan_Rapekas shipped a theme component that adds the AI summary action into the topic header / sidebar timeline area, and tied it back to longstanding requests about summary button placement (AI summary in topic header, Feedback: Move summarize button at the top of the topic, Summarize button placement on mobile views).

Several threads focused on polish and reliability in AI admin settings: wording glitches like the repeated “Default LLM” error label were acknowledged and queued for fixing (Why is ‘Default LLM’ repeated…), and i18n layout issues in the LLM cost configuration UI (German) continued to be refined (Field alignment issues… in German).

Meanwhile, the community revisited agent safety boundaries (notably concerns around AI acting “as a user” without admin oversight) (Discourse官方会出个官方的openclaw skill么?, openclaw plugin for discourse integration), and tackled integration constraints like tool-calling timeouts and connecting Discourse AI to self-hosted RAG/knowledge bases (Discourse ai 的工具调用超时如何解决?, Discourse ai 如何引入自建知识库RAG?). There was also a small but notable question about whether Discourse MCP can access PDF attachments via the protocol (Discourse MCP is here!).


Interesting Topics


Activity


Thanks for reading, and I’ll see you again next week! :slight_smile:

Overview

This week’s AI activity on Meta Discourse centered on making AI-powered localization more accurate and predictable, especially for small-but-important UI surfaces like tags and categories. Moin surfaced several “LLM-without-context” translation failures in AI-generated tag translations do not work perfectly, prompting nat to consider prompt improvements and adding extra grounding context such as tag descriptions (reply), while Falco explored tool-assisted approaches like letting the agent read relevant sources (idea, follow-up). Related “keep translations in sync” feedback also landed as feature requests for category name and category description updates (category names, category descriptions).

On the configuration side, a troubleshooting thread revealed confusion around what kinds of PMs get translated and how the UI communicates that. In Help me troubleshoot why AI is not translating PMs on my site, Moin clarified the current limitation (group PMs vs 1:1 PMs) (details), while Falco proposed a clearer multi-choice setting (proposal) and nat hinted that upcoming “translate these categories” controls could reshape the settings UX (plan).

Finally, there were incremental improvements and ecosystem enhancements: clearer messaging for semantic vs exact search results (search clarification), interest in refining AI persona behaviors to reduce noise (mention-only request), and continued adoption of AI summaries in UI via a theme component (feedback).


Interesting Topics


Activity


Thanks for reading, and I’ll see you again next week! :slight_smile:

Overview

This week (2026-03-30 → 2026-04-06) on meta.discourse.org saw Discourse AI discussions cluster around three big themes:

  1. MCP momentum and agent capabilities: Discourse AI doubled down on the Model Context Protocol with the announcement of client-side MCP support—letting Discourse AI agents call out to external MCP tool servers (Bring your own MCP!) and a full admin guide (AI Bot – Bring Your Own MCP Server). In parallel, the server-side MCP tooling kept evolving, including adding an edit tool so LLMs can update existing posts/wiki content via MCP (Discourse MCP is here!).

  2. Moderation and privacy boundaries in AI automation: A practical moderation question—whether AI triage can scan private messages (DMs)—ended up being a UI/configuration gotcha rather than a hard limitation, and sparked follow-up ideas for clearer controls in the automation UI (Does AI triage automation scan DMs between regular users?, solution).

  3. Model-specific quirks in localization and embeddings: Multiple threads highlighted that “AI features” are often “model behavior + integration details.” Translation issues ranged from German “AI commentary / thinking text” leakage that was fixed quickly (AI Commentary on German Translations) to missing images when translating via Mistral Small, which was mitigated by switching models (Images missing in translated posts when using Mistral as translation model). On the embeddings side, Mistral’s API mismatch (dimensions vs output_dimension) surfaced in configuration (Use Mistral for embeddings). There were also real-world admin bumps caused by deprecated Gemini model IDs in AI bot setups (Issue with AI bots forum bots).


Interesting Topics

  • Discourse AI agents can now connect to any MCP server (“Bring your own MCP”) (ai, #Announcements)
    sam announced that Discourse AI agents can register external MCP server URLs (GitHub, Notion, Linear, search providers, etc.) and then use discovered tools directly from the LLM agent (Bring your own MCP!). The companion how-to explains setup, tool discovery, and how this differs from JS-based custom tools (AI Bot – Bring Your Own MCP Server).

  • MCP usability: request for “remote/web MCP” + adding the ability to edit existing posts (ai, mcp, blog)
    In ongoing MCP feedback, pacharanero explored how MCP could be made more accessible to non-CLI users via a web-published endpoint (Discourse MCP is here!). jrgong highlighted a KB/docs use-case needing edits to existing topics/posts (ref), and Falco confirmed an edit tool was added (“just update to latest”) (ref).

  • AI triage moderation + DM scanning: “Include personal messages” works, but ‘All topics’ caused confusion (automation, ai, Support)
    Denis_Kovalenko tested “Triage posts using AI” and found PMs between regular users weren’t being scanned (Does AI triage automation scan DMs between regular users?, test details). RGJ confirmed the PMs weren’t reaching audit logs and identified the workaround: leave “Topic Type” empty rather than “All topics” (ref). The fix worked immediately (ref), and the thread turned into a UX discussion about clearer options (ref, ref).

  • Translated German posts included “AI commentary/thought process” text—quickly fixed (ai, content-localization, bug, fixed)
    putty reported German translations leaking “thinking/translate” commentary into output (AI Commentary on German Translations). nat shipped an update to tighten formatting and cleaned up affected content (ref), with user confirmation afterward (ref).

  • Mistral translations dropped images in translated views (upload:// links), resolved by upgrading model (ai, content-localization, Support)
    Denis_Kovalenko found that switching the translation model from OpenAI to Mistral caused translated versions to render text but omit images (Images missing in translated posts when using Mistral as translation model, behavior details). RGJ suggested prompt hardening and/or trying a better model (ref), and switching from Mistral Small → Mistral Large fixed it (ref). Later, Falco asked for clarification on which “Mistral Small” was meant and recommended using stronger small-class models if needed (ref).

  • Embeddings with Mistral: OpenAI-compat config breaks on dimensions parameter naming (ai, #Feature)
    RGJ documented that configuring Mistral embeddings through an OpenAI-shaped integration fails if Discourse sends dimensions, because Mistral expects output_dimension (Use Mistral for embeddings). Removing the parameter makes the test succeed, suggesting a compatibility layer or provider-specific mapping may be needed (ref).
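A provider-specific mapping of the kind suggested in that thread could be as small as renaming the field before the request leaves the integration layer. This is a sketch of the idea, not Discourse’s actual code; the two parameter names (`dimensions` for OpenAI-compatible APIs, `output_dimension` for Mistral) are the ones reported in the topic.

```python
def adapt_embeddings_payload(payload: dict, provider: str) -> dict:
    """Rename the embedding-size parameter for providers that diverge
    from the OpenAI-compatible `dimensions` name."""
    renames = {"mistral": {"dimensions": "output_dimension"}}
    adapted = dict(payload)
    for old, new in renames.get(provider, {}).items():
        if old in adapted:
            adapted[new] = adapted.pop(old)
    return adapted


# An OpenAI-shaped request is rewritten before hitting Mistral's endpoint:
adapt_embeddings_payload(
    {"model": "mistral-embed", "input": ["hello"], "dimensions": 256},
    "mistral",
)
# → {'model': 'mistral-embed', 'input': ['hello'], 'output_dimension': 256}
```

A small lookup table like this keeps the fix contained: unknown providers pass through untouched, and further quirks can be added as new entries.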

  • AI bot errors traced to deprecated Gemini model IDs + guidance for image generation models (ai, ai-bot, Support)
    ice.d ran into “Not found” errors with legacy bot configuration (Issue with AI bots forum bots). Lilly pointed out likely deprecation of gemini-2.5-flash-pre and suggested updating model URL/ID (including an image-capable option) (ref, config example), with NateDhaliwal sanity-checking whether any LLMs were configured (ref).

  • Should AI personas reply only to @mentions? Team leans toward workflows rather than niche toggles (ai, ai-bot, #Feature)
    In an existing feature request, sam questioned whether “reply only to @mentions” is better as a default than as another setting (Allow AI Persona/Agent to respond only to @mentions…). Falco argued that edge cases are better served by upcoming project workflows—e.g., a mention-trigger workflow can handle the behavior without adding more switches (ref).

  • Agent response delay: workflows are expected to cover timing controls (ai, Support)
    sam noted that configurable delays for AI agent responses are the type of thing workflows should support, though not immediately; otherwise, the API path requires custom dev (Adding a configurable delay to AI Agent responses).
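Until workflows cover this, the “custom dev via the API” path sam mentions could look roughly like the following: wait, then create the reply through Discourse’s standard POST /posts.json endpoint (the Api-Key / Api-Username headers are Discourse’s documented API auth). The site URL, credentials, topic ID, and delay values below are placeholders, and the delay logic itself is entirely custom.

```python
import json
import random
import time
import urllib.request


def jittered_delay(base_seconds: float, jitter_seconds: float) -> float:
    """A fixed base plus random jitter, so replies feel less instantaneous."""
    return base_seconds + random.uniform(0.0, jitter_seconds)


def post_reply(site: str, api_key: str, api_username: str,
               topic_id: int, raw: str) -> dict:
    """Create a post via Discourse's POST /posts.json API endpoint."""
    body = json.dumps({"topic_id": topic_id, "raw": raw}).encode("utf-8")
    req = urllib.request.Request(
        f"{site}/posts.json",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Api-Key": api_key,
            "Api-Username": api_username,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Placeholder values: wait 30-60 s, then reply to topic 123.
    time.sleep(jittered_delay(30.0, 30.0))
    post_reply("https://forum.example.com", "API_KEY", "ai_bot",
               123, "Here is the delayed AI reply.")
```

The jitter matters as much as the base delay: a reply that always lands exactly N seconds later still reads as automated.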

  • User-level control over AI (“disable AI nudges”) and PM translation settings migration (ai, ai-summarize, content-localization, ux/#Feature)
    paco argued that a per-user equivalent to discourse_ai_enabled could help people opt out of AI UI nudges without disabling AI site-wide (User Interface Preferences: include setting to disable AI nudges). Separately, translation settings changes continued to evolve around personal messages: nat linked a migration PR and described how prior “public content only” settings map into new category + PM targeting controls (AI translation of all PMs).


Activity

Thanks for reading, and I’ll see you again next week! :slight_smile: