This week’s AI conversations on Meta centered on making Discourse AI clearer to users and easier to operate at scale. On the product side, there was strong momentum to rename “AI Persona” to the more widely understood “AI Agent” (with translation workflow implications) in Renaming AI Persona → AI Agent and its follow-up posts. Admin experience also got attention: sites with AI disabled were still seeing AI dashboards/reports, which was confirmed as a bug and routed into broader reporting work in Don’t show AI reports if AI is not enabled and the related umbrella thread Admin Reporting & Analysis: Incremental Changes.
Theme component maintenance: AI Gists button compatibility with Modernized Foundation (#Theme-component, ai)
Lilly refactored a theme component so its formatting works with Modernized Foundation while keeping compatibility with the old Foundation theme in Discourse Topic Excerpts & AI Gists Button, tying into the broader theme effort in Modernizing the Foundation theme.
rburkej added a self-hosting perspective by asking for a detailed hardware profile and operational impact notes in Enabling AI search crippled my server, reinforcing that semantic search rollout needs clear sizing guidance.
Over the past week (2026-03-09 → 2026-03-16), Meta’s ai discussions clustered around product polish, reliability, and “real world” operations.
On the product side, Discourse moved closer to standardizing terminology by implementing the rename from AI Persona to AI Agent (Renaming AI Persona → AI Agent). On the infrastructure side, Discourse significantly expanded capacity for its hosted LLM offering—raising limits across all tiers and improving model quality and latency characteristics (Unlock All Discourse AI Features with Our Hosted LLM).
Meanwhile, operators focused on how AI fits into community rhythms: a request to delay AI Agent replies (so they feel less like a chatbot and more like a participant) surfaced both as a new Support topic (Adding a configurable delay to AI Agent responses) and as a follow-up in the longer-running “Agents” guide thread, where Discourse staff indicated that delayed responses would likely belong in a future automation overhaul rather than in Discourse AI itself (AI bot - Agents).
Integration conversations had a notable bump too: Google’s Programmable Search / Custom Search constraints and deprecations are forcing a rethink of web search tooling, with Discourse exploring alternative providers and even “native search tools” from LLM vendors (Google Search for Discourse AI - Programmable Search Engine and Custom Search API). In parallel, community guides continued to expand around the Discourse MCP ecosystem, including a newly posted OpenCode CLI setup walkthrough (Discourse MCP Setup in OpenCode CLI).
“AI Persona” renamed to “AI Agent” (terminology + implementation) (#Feature, ai)
Falco confirmed the rename work landed and pointed to the corresponding PR (Renaming AI Persona → AI Agent).
Hosted LLM capacity increased dramatically (plus model + performance upgrades) (#Announcements, ai)
Discourse reported higher plan limits across the board and described improvements such as an updated “state-of-the-art open weights” model, higher max tokens per request, better tokens/sec, and improved time-to-first-token (Unlock All Discourse AI Features with Our Hosted LLM).
Delaying AI Agent responses to match community pacing (1–4 hours) (Support, ai)
saurabhmithal asked for a configurable delay so Agents feel less like instant chatbots (Adding a configurable delay to AI Agent responses). In the broader “Agents” thread, Falco clarified it’s not possible today and positioned it as an automation capability, hinting at early planning for a major automation overhaul (AI bot - Agents); the original request is captured in-thread too (AI bot - Agents).
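Until workflows cover timing, self-hosters could script the pacing themselves through the standard Discourse REST API. A minimal sketch of the idea, assuming a bot account with an API key; the function name and scheduling approach are hypothetical, not a shipped feature:

```python
import random

def reply_delay_seconds(min_hours: float = 1.0, max_hours: float = 4.0) -> int:
    """Pick a random delay inside the requested 1-4 hour window so replies
    feel paced like a participant rather than an instant chatbot."""
    return int(random.uniform(min_hours * 3600, max_hours * 3600))

# Sketch of the eventual call (requires the `requests` package and a real
# API key; shown here as an assumption about the custom-dev path):
#
#   import time, requests
#   time.sleep(reply_delay_seconds())
#   requests.post(
#       "https://forum.example.com/posts.json",
#       headers={"Api-Key": API_KEY, "Api-Username": "ai-bot"},
#       json={"topic_id": 123, "raw": generated_reply},
#   )
```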
Discourse MCP setup guidance expands (OpenCode CLI) (users, ai, mcp)
pacharanero posted a tested guide for installing Discourse MCP into OpenCode CLI, emphasizing that users can even point an LLM at the guide URL to help with setup (Discourse MCP Setup in OpenCode CLI). The guide cross-links to the Codex CLI variant for other MCP clients (Discourse MCP Setup in OpenAI Codex CLI).
Sentiment debugging details: “allowed internal hosts” and job execution (Support, ai-sentiment)
satonotdead reported resolving part of the issue by adding an internal IP to allowed internal hosts, then manually running Jobs::SentimentBackfill but still wanting a complete historical backfill (Problems setting up Sentiment). Falco asked for clarification about whether at least the 60-day data appeared (Problems setting up Sentiment), and satonotdead confirmed dashboards work but the historical backfill is still the goal (Problems setting up Sentiment).
Whether Discourse should ship an official “OpenClaw skill” / agent that acts as a user (#Feature, ai)
In a Chinese-language feature thread, sniper756 proposed an OpenClaw-driven agent that can post/organize content on behalf of a user to build knowledge bases efficiently (Discourse官方会出个官方的openclaw skill么?, roughly “Will Discourse ship an official OpenClaw skill?”). awesomerobot reiterated caution: Discourse is not enthusiastic about agents impersonating real users without an admin in the loop, pointing to the Meta ban policy context while leaving room for admin-support tooling (Discourse官方会出个官方的openclaw skill么?, openclaw plugin for discourse integration).
Copy/wording cleanup: duplicated “Default LLM” in an error string (ux, ai)
Moin spotted an awkward repetition while looking at discourse_ai.ai_bot.agents.default_llm_required (“Default llm Default LLM…”) and reproduced it in the UI (Why is ‘Default LLM’ repeated…). awesomerobot confirmed it’s awkward and pointed to a fix in progress (Why is ‘Default LLM’ repeated…).
Prompting periodic AI summary reports to correctly capture vote counts (#Site_Management, automation, ai)
In the periodic summary reports how-to thread, julia1 asked how to craft a prompt so reports include the number of votes tied to feedback items (Discourse AI - Periodic summary reports).
MCP capability question: can MCP access PDF attachments in posts? (blog, ai, mcp)
In the Discourse MCP announcement thread, anaderi asked whether it’s possible for MCP to access PDF attachments uploaded to posts (Discourse MCP is here!).
Shauny
Proposed improving per-post translation ergonomics (especially on mobile) and suggested caching/saving translations to avoid repeated API calls (Translate post with AI and save translation). They clarified they didn’t want full auto-translate for the event—some friction was intentional—and instead wanted targeted per-post improvements (Translate post with AI and save translation).
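The caching idea Shauny raised is easy to picture: keep translations keyed by post, locale, and revision so the same post is never sent to the translation API twice. A minimal sketch with hypothetical names; the real plugin persists translations server-side rather than in memory:

```python
# In-memory translation cache keyed by (post_id, locale, version) so a
# repeated request for an unchanged post never re-calls the LLM.
_cache: dict = {}

def translate_post(post_id: int, locale: str, version: int,
                   raw: str, call_api) -> str:
    key = (post_id, locale, version)
    if key not in _cache:
        _cache[key] = call_api(raw, locale)  # expensive API call
    return _cache[key]
```

Bumping `version` on edit naturally invalidates the stale entry without any explicit cache-clearing logic.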
Falco
Provided product context on translation UX, noting the existing design expectation: enable automatic translation and let users toggle it in-topic (Translate post with AI and save translation). Separately, they helped diagnose tool-call reliability by stating Discourse AI’s read_timeout is 10 seconds and probing whether the external API exceeds that (Discourse ai 的工具调用超时如何解决?是否可以调整discourse超时时间,如何调整?, roughly “How do I resolve Discourse AI tool-call timeouts? Can the Discourse timeout be adjusted, and how?”).
sniper756
Started a feature request (in Chinese) asking if Discourse would ship an official OpenClaw “skill,” describing workflows where an AI agent performs forum operations and reposts authorized content into categorized/tagged knowledge bases (Discourse官方会出个官方的openclaw skill么?).
This week’s AI activity on Meta Discourse centered on making AI-powered localization more accurate and predictable, especially for small-but-important UI surfaces like tags and categories. Moin surfaced several “LLM-without-context” translation failures in AI-generated tag translations do not work perfectly, prompting nat to consider prompt improvements and adding extra grounding context such as tag descriptions (reply), while Falco explored tool-assisted approaches like letting the agent read relevant sources (idea, follow-up). Related “keep translations in sync” feedback also landed as feature requests for category name and category description updates (category names, category descriptions).
On the configuration side, a troubleshooting thread revealed confusion around what kinds of PMs get translated and how the UI communicates that. In Help me troubleshoot why AI is not translating PMs on my site, Moin clarified the current limitation (group PMs vs 1:1 PMs) (details), while Falco proposed a clearer multi-choice setting (proposal) and nat hinted that upcoming “translate these categories” controls could reshape the settings UX (plan).
Finally, there were incremental improvements and ecosystem enhancements: clearer messaging for semantic vs exact search results (search clarification), interest in refining AI persona behaviors to reduce noise (mention-only request), and continued adoption of AI summaries in UI via a theme component (feedback).
Interesting Topics
AI tag translations lack product/context grounding (and can get hilariously wrong)
Moin documented how AI translations treat tags as isolated words, leading to incorrect or ambiguous results in AI-generated tag translations do not work perfectly. nat committed to improving prompts (response), and the discussion expanded into “how do we ground tag/category translations?” including feeding tag descriptions (idea), leveraging Crowdin glossary/localization choices (glossary suggestion), and giving the agent the ability to consult existing translations or sources (agent-access idea, code-grounding follow-up).
PM translation confusion: “100% translated” but nothing happens for 1:1 messages
In Help me troubleshoot why AI is not translating PMs on my site, tobiaseigen found PMs weren’t auto-translating despite the UI implying completion. Moin explained that current settings cover group PMs rather than direct 1:1 PMs (clarification), leading to a UX/settings rethink: Falco proposed replacing a boolean with explicit targets (settings proposal), and nat connected this to upcoming category-scoped translation controls (notes).
Feature request: keep translated category descriptions in sync with the “about” topic banner source
In Automatically update translated category descriptions, Moin highlighted a mismatch where the localized “about topic” translation updated, but the category banner description remained stale—suggesting translation sync should cascade to banner data.
Theme component: “AI summary in topic header” getting positive field feedback
kaktak reported strong results with the component in AI summary in topic header (ai-summarize), signaling ongoing appetite for in-context AI summarization UI.
“Do we need an official OpenClaw skill?”—workarounds via scoped credentials
In Discourse官方会出个官方的openclaw skill么?, sniper756 concluded they could solve their integration need without a dedicated skill by provisioning a user with specific permissions and securely storing credentials.
Localization update: translated tags are now live (with a pointer to the main localization feature)
In the older feature thread Tags übersetzen (German for “Translate tags”), nat posted an update that tags are now translated, pointing readers to the main localization/translation feature announcement (related feature hub).
nat acknowledged the tag translation problems and committed to prompt improvements in AI-generated tag translations do not work perfectly, then explored more predictable grounding strategies like passing the tag description to the model (idea). In the PM translation discussion, they pointed to upcoming category-scoped translation controls and suggested the settings UI could be redesigned accordingly (reply). They also posted a status update that tag translation is now live in Tags übersetzen, referencing the broader localization initiative (content localization hub).
sniper756 closed the loop on an integration question by explaining they didn’t need an official skill after all, using a permission-scoped user credential approach in Discourse官方会出个官方的openclaw skill么?.
Thanks for reading, and I’ll see you again next week!
This week (2026-03-30 → 2026-04-06) on meta.discourse.org saw Discourse AI discussions cluster around three big themes:
MCP momentum and agent capabilities: Discourse AI doubled down on the Model Context Protocol with the announcement of client-side MCP support—letting Discourse AI agents call out to external MCP tool servers (Bring your own MCP!) and a full admin guide (AI Bot – Bring Your Own MCP Server). In parallel, the server-side MCP tooling kept evolving, including adding an edit tool so LLMs can update existing posts/wiki content via MCP (Discourse MCP is here!).
Moderation and privacy boundaries in AI automation: A practical moderation question—whether AI triage can scan private messages (DMs)—ended up being a UI/configuration gotcha rather than a hard limitation, and sparked follow-up ideas for clearer controls in the automation UI (Does AI triage automation scan DMs between regular users?, solution).
Model-specific quirks in localization and embeddings: Multiple threads highlighted that “AI features” are often “model behavior + integration details.” Translation issues ranged from German “AI commentary / thinking text” leakage that was fixed quickly (AI Commentary on German Translations) to missing images when translating via Mistral Small, which was mitigated by switching models (Images missing in translated posts when using Mistral as translation model). On the embeddings side, Mistral’s API mismatch (dimensions vs output_dimension) surfaced in configuration (Use Mistral for embeddings). There were also real-world admin bumps caused by deprecated Gemini model IDs in AI bot setups (Issue with AI bots forum bots).
Interesting Topics
Discourse AI agents can now connect to any MCP server (“Bring your own MCP”) (ai, #Announcements) sam announced that Discourse AI agents can register external MCP server URLs (GitHub, Notion, Linear, search providers, etc.) and then use discovered tools directly from the LLM agent (Bring your own MCP!). The companion how-to explains setup, tool discovery, and how this differs from JS-based custom tools (AI Bot – Bring Your Own MCP Server).
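Under the hood, MCP tool invocation is JSON-RPC 2.0: the client discovers tools via `tools/list` and invokes one via `tools/call`. A sketch of the wire shape, assuming the MCP specification's method names; the tool name and arguments below are hypothetical:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as an MCP client would
    after discovering the tool through `tools/list`."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# e.g. asking a hypothetical GitHub MCP server to search issues:
msg = mcp_tool_call(1, "search_issues", {"query": "embedding bug"})
```

This uniform request shape is what lets an agent use GitHub, Notion, or Linear tools interchangeably once the server URL is registered.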
MCP usability: request for “remote/web MCP” + adding the ability to edit existing posts (ai, mcp, blog)
In ongoing MCP feedback, pacharanero explored how MCP could be made more accessible to non-CLI users via a web-published endpoint (Discourse MCP is here!). jrgong highlighted a KB/docs use-case needing edits to existing topics/posts (ref), and Falco confirmed an edit tool was added (“just update to latest”) (ref).
AI triage moderation + DM scanning: “Include personal messages” works, but ‘All topics’ caused confusion (automation, ai, Support)
Denis_Kovalenko tested “Triage posts using AI” and found PMs between regular users weren’t being scanned (Does AI triage automation scan DMs between regular users?, test details). RGJ confirmed the PMs weren’t reaching audit logs and identified the workaround: leave “Topic Type” empty rather than “All topics” (ref). The fix worked immediately (ref), and the thread turned into a UX discussion about clearer options (ref, ref).
Translated German posts included “AI commentary/thought process” text—quickly fixed (ai, content-localization, bug, fixed)
putty reported German translations leaking “thinking/translate” commentary into output (AI Commentary on German Translations). nat shipped an update to tighten formatting and cleaned up affected content (ref), with user confirmation afterward (ref).
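Reasoning models sometimes emit their chain-of-thought alongside the final answer, and a common defensive step is to strip it in post-processing before storing the translation. A sketch of that idea, assuming `<think>…</think>` delimiters; this is an illustration of the failure mode, not nat's actual patch:

```python
import re

# Remove any <think>...</think> reasoning blocks the model leaked into
# its output, then trim surrounding whitespace.
THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def clean_translation(text: str) -> str:
    return THINK_RE.sub("", text).strip()
```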
Mistral translations dropped images in translated views (upload:// links), resolved by upgrading model (ai, content-localization, Support)
Denis_Kovalenko found that switching the translation model from OpenAI to Mistral caused translated versions to render text but omit images (Images missing in translated posts when using Mistral as translation model, behavior details). RGJ suggested prompt hardening and/or trying a better model (ref), and switching from Mistral Small → Mistral Large fixed it (ref). Later, Falco asked for clarification on which “Mistral Small” was meant and recommended using stronger small-class models if needed (ref).
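Beyond prompt hardening, this class of bug is cheap to detect mechanically: every `upload://` short URL in the source markdown should survive into the translation. A validation sketch (hypothetical helper, not part of the plugin):

```python
import re

# Discourse image attachments appear in markdown as upload:// short URLs.
UPLOAD_RE = re.compile(r"upload://[\w.\-]+")

def missing_uploads(source_md: str, translated_md: str) -> set:
    """Return upload:// references present in the source post but absent
    from its translation, i.e. images the model dropped."""
    return set(UPLOAD_RE.findall(source_md)) - set(UPLOAD_RE.findall(translated_md))
```

A non-empty result could trigger a retry or a fallback to a stronger model.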
Embeddings with Mistral: OpenAI-compat config breaks on dimensions parameter naming (ai, #Feature)
RGJ documented that configuring Mistral embeddings through an OpenAI-shaped integration fails if Discourse sends dimensions, because Mistral expects output_dimension (Use Mistral for embeddings). Removing the parameter makes the test succeed, suggesting a compatibility layer or provider-specific mapping may be needed (ref).
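The provider-specific mapping RGJ suggests amounts to renaming one request parameter before dispatch. A minimal sketch of such a shim, with a hypothetical function name, based on the parameter names reported in the thread (`dimensions` for OpenAI-shaped requests, `output_dimension` for Mistral):

```python
def adapt_embedding_params(provider: str, params: dict) -> dict:
    """Rename the embedding-size parameter for providers whose API
    diverges from the OpenAI shape; leave other providers untouched."""
    params = dict(params)  # avoid mutating the caller's dict
    if provider == "mistral" and "dimensions" in params:
        params["output_dimension"] = params.pop("dimensions")
    return params
```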
AI bot errors traced to deprecated Gemini model IDs + guidance for image generation models (ai, ai-bot, Support)
ice.d ran into “Not found” errors with a legacy bot configuration (Issue with AI bots forum bots). Lilly pointed out the likely deprecation of gemini-2.5-flash-pre and suggested updating the model URL/ID (including an image-capable option) (ref, config example), with NateDhaliwal sanity-checking whether any LLMs were configured (ref).
Should AI personas reply only to @mentions? Team leans toward workflows rather than niche toggles (ai, ai-bot, #Feature)
In an existing feature request, sam questioned whether “reply only to @mentions” is better as a default than as another setting (Allow AI Persona/Agent to respond only to @mentions…). Falco argued that edge cases are better served by upcoming project workflows—e.g., a mention-trigger workflow can handle the behavior without adding more switches (ref).
Agent response delay: workflows are expected to cover timing controls (ai, Support)
sam noted that configurable delays for AI agent responses are the type of thing workflows should support, though not immediately; otherwise, the API path requires custom dev (Adding a configurable delay to AI Agent responses).
User-level control over AI (“disable AI nudges”) and PM translation settings migration (ai, ai-summarize, content-localization, ux/#Feature) paco argued that a per-user equivalent to discourse_ai_enabled could help people opt out of AI UI nudges without disabling AI site-wide (User Interface Preferences: include setting to disable AI nudges). Separately, translation settings changes continued to evolve around personal messages: nat linked a migration PR and described how prior “public content only” settings map into new category + PM targeting controls (AI translation of all PMs).
Activity
sam shipped MCP expansion and nudged future workflow-based automation: