This week’s AI conversations on Meta centered on making Discourse AI clearer to users and easier to operate at scale. On the product side, there was strong momentum behind renaming “AI Persona” to the more widely understood “AI Agent” (with translation workflow implications) in Renaming AI Persona → AI Agent and its follow-up discussions. Admin experience also got attention: sites with AI disabled were still seeing AI dashboards/reports, which was confirmed as a bug and routed into broader reporting work in Don’t show AI reports if AI is not enabled and the related umbrella thread Admin Reporting & Analysis: Incremental Changes.
Theme component maintenance: AI Gists button compatibility with Modernized Foundation (#Theme-component, ai) Lilly refactored a theme component so formatting works with Modernized Foundation while keeping compatibility with the old Foundation theme in Discourse Topic Excerpts & AI Gists Button, tying into the broader theme effort in Modernizing the Foundation theme.
rburkej added a self-hosting perspective by asking for a detailed hardware profile and operational impact notes in Enabling AI search crippled my server, reinforcing that semantic search rollout needs clear sizing guidance.
Over the past week (2026-03-09 → 2026-03-16), Meta’s ai discussions clustered around product polish, reliability, and “real world” operations.
On the product side, Discourse moved closer to standardizing terminology by implementing the rename from AI Persona to AI Agent (Renaming AI Persona → AI Agent). On the infrastructure side, Discourse significantly expanded capacity for its hosted LLM offering—raising limits across all tiers and improving model quality and latency characteristics (Unlock All Discourse AI Features with Our Hosted LLM).
Meanwhile, operators focused on how AI fits into community rhythms: a request to delay AI Agent replies (so they feel less like a chatbot and more like a participant) surfaced both as a new Support topic (Adding a configurable delay to AI Agent responses) and as a follow-up in the longer-running “Agents” guide thread, where Discourse staff indicated that delayed responses would likely belong in a future automation overhaul rather than in Discourse AI itself (AI bot - Agents).
Integration conversations had a notable bump too: Google’s Programmable Search / Custom Search constraints and deprecations are forcing a rethink of web search tooling, with Discourse exploring alternative providers and even “native search tools” from LLM vendors (Google Search for Discourse AI - Programmable Search Engine and Custom Search API). In parallel, community guides continued to expand around the Discourse MCP ecosystem, including a newly posted OpenCode CLI setup walkthrough (Discourse MCP Setup in OpenCode CLI).
“AI Persona” renamed to “AI Agent” (terminology + implementation) (#Feature, ai). Falco confirmed the rename work landed and pointed to the corresponding PR (Renaming AI Persona → AI Agent).
Hosted LLM capacity increased dramatically (plus model + performance upgrades) (#Announcements, ai). Discourse reported higher plan limits across the board and described improvements such as an updated “state-of-the-art open weights” model, higher max tokens per request, better tokens/sec, and improved time-to-first-token (Unlock All Discourse AI Features with Our Hosted LLM).
Delaying AI Agent responses to match community pacing (1–4 hours) (Support, ai). saurabhmithal asked for a configurable delay so Agents feel less like instant chatbots (Adding a configurable delay to AI Agent responses). In the broader “Agents” thread, Falco clarified it’s not possible today and positioned it as an automation capability, hinting at early planning for a major automation overhaul (AI bot - Agents); the original request is captured in-thread too (AI bot - Agents).
Discourse MCP setup guidance expands (OpenCode CLI) (users, ai, mcp). pacharanero posted a tested guide for installing Discourse MCP into OpenCode CLI, emphasizing that users can even point an LLM at the guide URL to help with setup (Discourse MCP Setup in OpenCode CLI). The guide cross-links to the Codex CLI variant for other MCP clients (Discourse MCP Setup in OpenAI Codex CLI).
Sentiment debugging details: “allowed internal hosts” and job execution (Support, ai-sentiment). satonotdead reported resolving part of the issue by adding an internal IP to allowed internal hosts, then manually running Jobs::SentimentBackfill but still wanting complete historical backfill (Problems setting up Sentiment). Falco asked for clarification about whether at least the 60-day data appeared (Problems setting up Sentiment), and satonotdead confirmed dashboards work but the historical backfill is still the goal (Problems setting up Sentiment).
Whether Discourse should ship an official “OpenClaw skill” / agent that acts as a user (#Feature, ai)
In a Chinese-language feature thread, sniper756 proposed an OpenClaw-driven agent that can post/organize content on behalf of a user to build knowledge bases efficiently (Discourse官方会出个官方的openclaw skill么?, roughly “Will Discourse ship an official OpenClaw skill?”). awesomerobot reiterated caution: Discourse is not enthusiastic about agents impersonating real users without an admin in the loop, pointing to the Meta ban policy context while leaving room for admin-support tooling (Discourse官方会出个官方的openclaw skill么?, openclaw plugin for discourse integration).
Copy/wording cleanup: duplicated “Default LLM” in an error string (ux, ai) Moin spotted an awkward repetition while looking at discourse_ai.ai_bot.agents.default_llm_required (“Default llm Default LLM…”) and reproduced it in the UI (Why is ‘Default LLM’ repeated…). awesomerobot confirmed it’s awkward and pointed to a fix in progress (Why is ‘Default LLM’ repeated…).
Prompting periodic AI summary reports to correctly capture vote counts (#Site_Management, automation, ai)
In the periodic summary reports how-to thread, julia1 asked how to craft a prompt so reports include the number of votes tied to feedback items (Discourse AI - Periodic summary reports).
MCP capability question: can MCP access PDF attachments in posts? (blog, ai, mcp)
In the Discourse MCP announcement thread, anaderi asked whether it’s possible for MCP to access PDF attachments uploaded to posts (Discourse MCP is here!).
Shauny
Proposed improving per-post translation ergonomics (especially on mobile) and suggested caching/saving translations to avoid repeated API calls (Translate post with AI and save translation). They clarified they didn’t want full auto-translate for the event—some friction was intentional—and instead wanted targeted per-post improvements (Translate post with AI and save translation).
Falco
Provided product context on translation UX, noting the existing design expectation: enable automatic translation and let users toggle it in-topic (Translate post with AI and save translation). Separately, they helped diagnose tool-call reliability by stating Discourse AI’s read_timeout is 10 seconds and probing whether the external API exceeds that (Discourse ai 的工具调用超时如何解决?是否可以调整discourse超时时间,如何调整?, roughly “How do I fix Discourse AI tool-call timeouts, and can the Discourse timeout be adjusted?”).
sniper756
Started a feature request (in Chinese) asking if Discourse would ship an official OpenClaw “skill,” describing workflows where an AI agent performs forum operations and reposts authorized content into categorized/tagged knowledge bases (Discourse官方会出个官方的openclaw skill么?).
This week’s AI activity on Meta Discourse centered on making AI-powered localization more accurate and predictable, especially for small-but-important UI surfaces like tags and categories. Moin surfaced several “LLM-without-context” translation failures in AI-generated tag translations do not work perfectly, prompting nat to consider prompt improvements and adding extra grounding context such as tag descriptions (reply), while Falco explored tool-assisted approaches like letting the agent read relevant sources (idea, follow-up). Related “keep translations in sync” feedback also landed as feature requests for category name and category description updates (category names, category descriptions).
On the configuration side, a troubleshooting thread revealed confusion around what kinds of PMs get translated and how the UI communicates that. In Help me troubleshoot why AI is not translating PMs on my site, Moin clarified the current limitation (group PMs vs 1:1 PMs) (details), while Falco proposed a clearer multi-choice setting (proposal) and nat hinted that upcoming “translate these categories” controls could reshape the settings UX (plan).
Finally, there were incremental improvements and ecosystem enhancements: clearer messaging for semantic vs exact search results (search clarification), interest in refining AI persona behaviors to reduce noise (mention-only request), and continued adoption of AI summaries in UI via a theme component (feedback).
Interesting Topics
AI tag translations lack product/context grounding (and can get hilariously wrong) Moin documented how AI translations treat tags as isolated words, leading to incorrect or ambiguous results in AI-generated tag translations do not work perfectly. nat committed to improving prompts (response), and the discussion expanded into “how do we ground tag/category translations?” including feeding tag descriptions (idea), leveraging Crowdin glossary/localization choices (glossary suggestion), and giving the agent the ability to consult existing translations or sources (agent-access idea, code-grounding follow-up).
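The “feed the model the tag description” idea can be illustrated with a small prompt builder; the function name and wording below are assumptions for illustration, not what Discourse ships:

```python
def tag_translation_prompt(tag: str, description, target_locale: str) -> str:
    """Hypothetical prompt builder showing the grounding idea: include the
    tag's description so the model doesn't translate the tag as an isolated,
    ambiguous word."""
    if description:
        context = f'The tag is described as: "{description}"'
    else:
        context = "No description is available; prefer the most common forum-software sense."
    return (
        f"Translate the forum tag '{tag}' into the locale '{target_locale}'.\n"
        f"{context}\n"
        "Return only the translated tag, short and lowercase."
    )
```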
PM translation confusion: “100% translated” but nothing happens for 1:1 messages
In Help me troubleshoot why AI is not translating PMs on my site, tobiaseigen found PMs weren’t auto-translating despite the UI implying completion. Moin explained that current settings cover group PMs rather than direct 1:1 PMs (clarification), leading to a UX/settings rethink: Falco proposed replacing a boolean with explicit targets (settings proposal), and nat connected this to upcoming category-scoped translation controls (notes).
Feature request: keep translated category descriptions in sync with the “about” topic banner source
In Automatically update translated category descriptions, Moin highlighted a mismatch where the localized “about topic” translation updated, but the category banner description remained stale—suggesting translation sync should cascade to banner data.
Theme component: “AI summary in topic header” getting positive field feedback kaktak reported strong results with the component in AI summary in topic header (ai-summarize), signaling ongoing appetite for in-context AI summarization UI.
“Do we need an official OpenClaw skill?”—workarounds via scoped credentials
In Discourse官方会出个官方的openclaw skill么?, sniper756 concluded they could solve their integration need without a dedicated skill by provisioning a user with specific permissions and securely storing credentials.
Localization update: translated tags are now live (with a pointer to the main localization feature)
In the older feature thread Tags übersetzen (German for “Translate tags”), nat posted an update that tags are now translated, pointing readers to the main localization/translation feature announcement (related feature hub).
nat acknowledged the tag translation problems and committed to prompt improvements in AI-generated tag translations do not work perfectly, then explored more predictable grounding strategies like passing the tag description to the model (idea). In the PM translation discussion, they pointed to upcoming category-scoped translation controls and suggested the settings UI could be redesigned accordingly (reply). They also posted a status update that tag translation is now live in Tags übersetzen, referencing the broader localization initiative (content localization hub).
sniper756 closed the loop on an integration question by explaining they didn’t need an official skill after all, using a permission-scoped user credential approach in Discourse官方会出个官方的openclaw skill么?.
Thanks for reading, and I’ll see you again next week!
This week (2026-03-30 → 2026-04-06) on meta.discourse.org saw Discourse AI discussions cluster around three big themes:
MCP momentum and agent capabilities: Discourse AI doubled down on the Model Context Protocol with the announcement of client-side MCP support—letting Discourse AI agents call out to external MCP tool servers (Bring your own MCP!) and a full admin guide (AI Bot – Bring Your Own MCP Server). In parallel, the server-side MCP tooling kept evolving, including adding an edit tool so LLMs can update existing posts/wiki content via MCP (Discourse MCP is here!).
Moderation and privacy boundaries in AI automation: A practical moderation question—whether AI triage can scan private messages (DMs)—ended up being a UI/configuration gotcha rather than a hard limitation, and sparked follow-up ideas for clearer controls in the automation UI (Does AI triage automation scan DMs between regular users?, solution).
Model-specific quirks in localization and embeddings: Multiple threads highlighted that “AI features” are often “model behavior + integration details.” Translation issues ranged from German “AI commentary / thinking text” leakage that was fixed quickly (AI Commentary on German Translations) to missing images when translating via Mistral Small, which was mitigated by switching models (Images missing in translated posts when using Mistral as translation model). On the embeddings side, Mistral’s API mismatch (dimensions vs output_dimension) surfaced in configuration (Use Mistral for embeddings). There were also real-world admin bumps caused by deprecated Gemini model IDs in AI bot setups (Issue with AI bots forum bots).
Interesting Topics
Discourse AI agents can now connect to any MCP server (“Bring your own MCP”) (ai, #Announcements) sam announced that Discourse AI agents can register external MCP server URLs (GitHub, Notion, Linear, search providers, etc.) and then use discovered tools directly from the LLM agent (Bring your own MCP!). The companion how-to explains setup, tool discovery, and how this differs from JS-based custom tools (AI Bot – Bring Your Own MCP Server).
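Under the hood, MCP tool discovery is a JSON-RPC 2.0 exchange; as a rough illustration, a `tools/list` request body looks like this (transport wiring, whether stdio or streamable HTTP, is omitted):

```python
import json

def mcp_tools_list_request(request_id: int = 1) -> str:
    """Build a minimal JSON-RPC 2.0 body for MCP tool discovery. The
    'tools/list' method comes from the Model Context Protocol spec; how the
    body is framed and sent depends on the chosen transport."""
    return json.dumps({"jsonrpc": "2.0", "id": request_id, "method": "tools/list"})
```

The server’s response enumerates tool names and input schemas, which is what lets the agent call discovered tools without Discourse knowing about them in advance.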
MCP usability: request for “remote/web MCP” + adding the ability to edit existing posts (ai, mcp, blog)
In ongoing MCP feedback, pacharanero explored how MCP could be made more accessible to non-CLI users via a web-published endpoint (Discourse MCP is here!). jrgong highlighted a KB/docs use-case needing edits to existing topics/posts (ref), and Falco confirmed an edit tool was added (“just update to latest”) (ref).
AI triage moderation + DM scanning: “Include personal messages” works, but ‘All topics’ caused confusion (automation, ai, Support) Denis_Kovalenko tested “Triage posts using AI” and found PMs between regular users weren’t being scanned (Does AI triage automation scan DMs between regular users?, test details). RGJ confirmed the PMs weren’t reaching audit logs and identified the workaround: leave “Topic Type” empty rather than “All topics” (ref). The fix worked immediately (ref), and the thread turned into a UX discussion about clearer options (ref, ref).
Translated German posts included “AI commentary/thought process” text—quickly fixed (ai, content-localization, bug, fixed) putty reported German translations leaking “thinking/translate” commentary into output (AI Commentary on German Translations). nat shipped an update to tighten formatting and cleaned up affected content (ref), with user confirmation afterward (ref).
Mistral translations dropped images in translated views (upload:// links), resolved by upgrading model (ai, content-localization, Support) Denis_Kovalenko found that switching the translation model from OpenAI to Mistral caused translated versions to render text but omit images (Images missing in translated posts when using Mistral as translation model, behavior details). RGJ suggested prompt hardening and/or trying a better model (ref), and switching from Mistral Small → Mistral Large fixed it (ref). Later, Falco asked for clarification on which “Mistral Small” was meant and recommended using stronger small-class models if needed (ref).
Embeddings with Mistral: OpenAI-compat config breaks on dimensions parameter naming (ai, #Feature) RGJ documented that configuring Mistral embeddings through an OpenAI-shaped integration fails if Discourse sends dimensions, because Mistral expects output_dimension (Use Mistral for embeddings). Removing the parameter makes the test succeed, suggesting a compatibility layer or provider-specific mapping may be needed (ref).
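RGJ’s finding amounts to a one-key rename. A hedged sketch of the kind of provider-specific mapping a compatibility layer might apply (the function is illustrative, not Discourse code):

```python
def adapt_embedding_params(params: dict, provider: str) -> dict:
    """Hypothetical compatibility shim: OpenAI-shaped embedding requests carry
    'dimensions', while Mistral's embeddings API expects 'output_dimension'.
    Renaming the key lets one OpenAI-style config work against both."""
    out = dict(params)  # don't mutate the caller's config
    if provider == "mistral" and "dimensions" in out:
        out["output_dimension"] = out.pop("dimensions")
    return out
```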
AI bot errors traced to deprecated Gemini model IDs + guidance for image generation models (ai, ai-bot, Support) ice.d ran into “Not found” errors with legacy bot configuration (Issue with AI bots forum bots). Lilly pointed out likely deprecation of gemini-2.5-flash-pre and suggested updating model URL/ID (including an image-capable option) (ref, config example), with NateDhaliwal sanity-checking whether any LLMs were configured (ref).
Should AI personas reply only to @mentions? Team leans toward workflows rather than niche toggles (ai, ai-bot, #Feature)
In an existing feature request, sam questioned whether “reply only to @mentions” is better as a default than as another setting (Allow AI Persona/Agent to respond only to @mentions…). Falco argued that edge cases are better served by upcoming project workflows—e.g., a mention-trigger workflow can handle the behavior without adding more switches (ref).
Agent response delay: workflows are expected to cover timing controls (ai, Support) sam noted that configurable delays for AI agent responses are the type of thing workflows should eventually support, though not immediately; for now, the only path is custom development against the API (Adding a configurable delay to AI Agent responses).
User-level control over AI (“disable AI nudges”) and PM translation settings migration (ai, ai-summarize, content-localization, ux, #Feature) paco argued that a per-user equivalent to discourse_ai_enabled could help people opt out of AI UI nudges without disabling AI site-wide (User Interface Preferences: include setting to disable AI nudges). Separately, translation settings changes continued to evolve around personal messages: nat linked a migration PR and described how prior “public content only” settings map into new category + PM targeting controls (AI translation of all PMs).
Activity
sam shipped the MCP expansion and pointed toward future workflow-based automation.
This week’s AI-focused activity on Meta (covering 2026-04-06 → 2026-04-13) centered on practical integration details—especially around AI discoverability files, provider/model choice for GDPR-sensitive deployments, and translation robustness.
In parallel, there was continued emphasis on model/provider selection for embeddings and translation, especially for communities needing strong EU/GDPR alignment. In Use Mistral for embeddings, Falco shared a working configuration and suggested considering stronger embedding models; and in Images missing in translated posts when using Mistral as translation model, provider options and “zero data retention” surfaced as part of deciding what’s acceptable for compliance and risk.
Finally, translation quality issues got very “hands-on”: a new bug report described a cooked/markup error after translation, and Moin traced it to Markdown table formatting—fixing the source table resolved the translated output in Cooked error after translate and was confirmed by cuo_wu in the resolution.
Embeddings provider choice: using Mistral vs higher-scoring alternatives (ai, #Feature)
In Use Mistral for embeddings, Falco shared a working setup and recommended considering embedding models that benchmark better (including Qwen-based embedding options). The broader thread frames Mistral as important for some deployments (including GDPR-oriented ones): Use Mistral for embeddings.
New bug report: “cooked error after translate” traced to Markdown table formatting (ai, bug)
A brand-new topic appeared this week: cuo_wu reported a translation/cooking issue that surfaced after switching languages in Cooked error after translate. Moin identified that missing leading/trailing | characters in Markdown tables can be tolerated in the original language but break translated rendering; fixing the table in English fixed the translation too (diagnosis + examples). cuo_wu confirmed the fix (confirmation). The report referenced content in Discourse FontAwesome Pro, which helped demonstrate the affected markup.
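The fix Moin applied by hand can be expressed as a tiny normalization step; this sketch (my own, not part of Discourse) adds the strict leading/trailing pipes that some renderers tolerate omitting but that broke the translated output:

```python
def normalize_table_row(row: str) -> str:
    """Add the leading/trailing '|' that strict Markdown table parsers expect.
    Rows that already have them are returned unchanged."""
    row = row.strip()
    if not row.startswith("|"):
        row = "| " + row
    if not row.endswith("|"):
        row = row + " |"
    return row
```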
Workaround for delayed agent responses: external orchestrator bot + scheduled tagging (ai) saurabhmithal shared an implementation pattern for communities that want bots to participate less like autocomplete and more like a paced participant: use an external “orchestrator” bot (e.g., running via cron) that periodically checks categories and then tags the agent, combined with group restrictions so humans can’t directly trigger instant bot replies (Adding a configurable delay to AI Agent responses). The approach was referenced again from the related mention-only control discussion (Allow AI Persona/Agent to respond only to @mentions…).
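The orchestrator’s “tag the agent” step might look like the sketch below, using the public Discourse REST API from Python. The `/posts.json` endpoint and `Api-Key`/`Api-Username` headers follow the documented Discourse API; everything else (names, message text) is illustrative:

```python
import json
import urllib.request

def build_agent_mention(base_url: str, api_key: str, api_user: str,
                        topic_id: int, agent: str) -> urllib.request.Request:
    """Sketch of the orchestrator step from the thread: post a reply that
    @mentions the AI agent so it responds on the orchestrator's schedule
    (e.g. from cron) rather than instantly. Adjust for your own setup."""
    payload = json.dumps({
        "topic_id": topic_id,
        "raw": f"@{agent} please take a look at the latest replies here.",
    }).encode()
    return urllib.request.Request(
        f"{base_url}/posts.json",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Api-Key": api_key,
            "Api-Username": api_user,
        },
        method="POST",
    )

# urllib.request.urlopen(build_agent_mention(...)) would send the request; a
# cron job wrapping this, plus group restrictions so humans can't trigger the
# bot directly, gives the paced-participant effect described above.
```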
Persona configuration request: making a “standard chat AI” that ignores Discourse context (ai, ai-bot)
In New AI Persona Editor for Discourse, Alon1 asked how to configure a Persona to behave like a generic chatbot (e.g., akin to claude.ai), explicitly not searching Discourse posts, user details, or even acknowledging it’s embedded in Discourse. Thread root: New AI Persona Editor for Discourse.
Discourse MCP deployment ergonomics: “recommended” sidecar approach? (ai, mcp)
In Discourse MCP is here!, pacharanero asked whether there’s a Meta-recommended way to run MCP as a sidecar service, and also noted an “edit tool” addition mentioned by Falco. Thread root: Discourse MCP is here!.
Compliance nuance: “zero data retention” vs GDPR compliance (and self-hosting) (ai)
The week featured a recurring theme: provider selection isn’t just about functionality—it’s about what your community can defensibly operate. In Images missing in translated posts when using Mistral as translation model, RGJ stressed that ZDR ≠ GDPR compliance, while Falco emphasized there are many ZDR provider options (same thread) and that embeddings can often be self-hosted more easily than full LLMs (also echoed in Use Mistral for embeddings).
There was also a practical support thread on language detection and manual overrides when posts are mixed-language (German + English titles), and how translation can appear “broken” due to external configuration issues like outdated API keys (see Post not being detected as German and the resolution). Separately, an admin-only locale-switching error turned out to be caused by a stale theme preview query parameter in Chrome (see Error when switching locale and the fix).
On the “AI platform” side, there was renewed interest in Discourse MCP connectivity (including Claude connectors and HTTP availability) (see Discourse MCP is here!, and confirmation that HTTP is supported). Finally, the long-running AI agents how-to thread received a new question about custom agent skills for tailored scenarios (see AI bot - Agents).
Trendline: most “AI issues” this week weren’t about output quality—they were about operational robustness (job behavior, retries, backend availability, and configuration visibility) (e.g., skipped translations, verbose logging, and retry behavior questions).
Interesting Topics
AI translations intermittently skip locales (initially observed as Portuguese missing) (bug) Denis_Kovalenko reported that enabling many locales could lead to Portuguese not being generated (and later: any locale being skipped randomly), with titles and bodies translating inconsistently (see the original report: AI Translation skips Portuguese (pt) locale, clarifying settings: supported locales question, and the “randomly skipped locale” update: inconsistent results).
Debugging moved toward logs and deeper internals: nat suggested checking /logs and enabling the hidden ai_translation_verbose_logs setting (see hidden verbose logs suggestion), while RGJ later surfaced backend failures (503 unreachable_backend) affecting tags/topics/posts (see error output). The thread also raised implementation questions about why translation jobs are configured with retry: false (see retry question).
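For context on the retry question, a generic retry-with-backoff wrapper for transient 503s looks like this; it illustrates what `retry: false` opts out of and is not Discourse’s job code:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0, sleep=time.sleep):
    """Illustrative retry-with-exponential-backoff wrapper for transient
    failures such as the 503 unreachable_backend errors seen in the thread.
    The `sleep` parameter is injectable to keep the sketch testable."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise                              # out of attempts: surface it
            sleep(base_delay * 2 ** attempt)       # back off: 1s, 2s, 4s, ...
```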
Mixed-language posts can confuse detection; manual language selection does force detection (Support) putty shared a case where a German post wasn’t being translated, asking whether selecting German forces the language (see problem report). Falco confirmed that selecting a language does exactly that, and noted the post was mixed English/German with English titles influencing detection (see confirmation + explanation).
Translation “not working” traced to configuration (API key / provider) rather than the feature itself
In the same thread, putty initially saw no translation populate even after forcing it (see forcing translation didn’t help) and later noticed an error about the translated title being missing (see title missing error). Ultimately, the issue resolved when they corrected their translator setup (an old API key during a Claude plan switch) and switched back to CDCK’s LLM—after which title translation worked (see solution).
Composer UX change: locale selector moved into the composer toolbar Moin clarified that the language dropdown was moved into the composer toolbar, linking it to a core change (see before/after screenshots + PR reference). This came up while discussing translation workflows and manual entry (see follow-up preference discussion).
Admin-only “topic doesn’t exist / preview theme” error when switching locale is caused by a stale preview_theme_id Denis_Kovalenko reported an admin-only issue: switching interface language in a topic showed a persistent error about previewing a theme that doesn’t exist (see report). pmusaraj diagnosed it as a stuck ?preview_theme_id=ID parameter in Chrome (see diagnosis), and removing it resolved the issue (see solved confirmation).
Translation quality & limits: post size/context window, and model recommendations
While debugging sporadic translation gaps, nat mentioned a separate scenario where titles translated but bodies were skipped due to body size, and suggested checking the LLM context window settings; they also strongly advised against using “GPT mini” for translations based on customer feedback and early testing (see model + size/context notes). Denis_Kovalenko confirmed they had a very large context window configured (see context window detail).
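The body-size problem nat described can be approximated with a naive chunker; the 1.3 tokens-per-word ratio below is a rough assumption, and real deployments should measure with the provider’s tokenizer:

```python
def chunk_for_context(text: str, max_tokens: int, tokens_per_word: float = 1.3):
    """Very rough sketch: split a post body into pieces that fit a model's
    context window, approximating tokens as ~1.3 per word. Token counts are
    model-specific, so treat this only as an illustration of the sizing check."""
    words = text.split()
    words_per_chunk = max(1, int(max_tokens / tokens_per_word))
    return [" ".join(words[i:i + words_per_chunk])
            for i in range(0, len(words), words_per_chunk)]
```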
Discourse MCP connectivity: request for Claude.ai connector support; HTTP already supported
In the blog thread about MCP, putty asked whether an HTTP/SSE streaming version of the Discourse MCP server might be released to use as a connector in Claude.ai Chat (see question). Falco replied that HTTP support already exists and pointed back to earlier replies in the announcement thread (see HTTP supported response).
AI Agents extensibility: request for custom skills in AI bot - Agents. 赤丸的小烧酒 asked (in Chinese) whether agents can add custom skills for different scenario replies, seeking the ability to customize their own AI agent’s behavior (see custom skills request).
Activity
Denis_Kovalenko drove two localization/AI troubleshooting threads this week:
Opened and resolved an admin-only locale switching error in Error when switching locale, confirming the solution after removing a stale query parameter in the accepted fix.
Also noted difficulty finding hidden settings in this question.
pmusaraj focused on diagnosis and narrowing down configuration causes:
In Post not being detected as German, challenged an unrelated-commit hypothesis, asked about the LLM in use, and flagged Gemini deprecations contextually (within the same reply).
Requested a new topic for a separate issue report once the German-detection thread was solved (see request).
RGJ helped operationalize debugging and surfaced concrete failure signals:
Reported specific backend errors (503 unreachable_backend) and questioned job retry configuration in this key diagnostic post.
Moin pointed to docs and clarified UI changes affecting localization workflows:
Explained that ai_translation_verbose_logs is a hidden site setting and linked the relevant guide in this reply (and the referenced doc: Using hidden site settings).
Documented that the composer language selector moved into the toolbar (with screenshots and a PR reference) in Post not being detected as German.
putty contributed heavily across translation support and MCP discussion:
Raised the mixed-language detection/translation issue in Post not being detected as German, shared failed attempts to force translation in this follow-up, and later confirmed the real cause was an outdated API key / provider mismatch in the solution.
Asked about Claude.ai connector compatibility via HTTP/SSE streaming in Discourse MCP is here!.
Also expressed a UI preference about the old locale selector placement in this comment.
Falco answered usage questions and clarified MCP capabilities:
Confirmed that selecting a language manually forces the post language, and explained why mixed-language titles can skew detection in Post not being detected as German.
canbekcan explored translation workflow issues and hypotheses around recent changes:
Suggested a “select language first, then add title/content” workflow and described needing to recreate language options in Post not being detected as German.
Investigated a “missing title” problem, initially suspecting theme-related behavior in this reply, then reported they could reproduce errors and referenced recent code changes in this post.
Clarified they don’t use AI translation (academic requirements) and closed out their participation after UI clarification in this note.
赤丸的小烧酒 added an AI agents product-direction question by asking about agent extensibility through custom skills in AI bot - Agents.
Thanks for reading, and I’ll see you again next week!