This week’s AI conversations on Meta centered on making Discourse AI clearer to users and easier to operate at scale. On the product side, there was strong momentum to rename “AI Persona” to the more widely understood “AI Agent” (with translation workflow implications) in Renaming AI Persona → AI Agent and its follow-up posts. Admin experience also got attention: sites with AI disabled were still seeing AI dashboards/reports, which was confirmed as a bug and routed into broader reporting work in Don’t show AI reports if AI is not enabled and the related umbrella thread Admin Reporting & Analysis: Incremental Changes.
Theme component maintenance: AI Gists button compatibility with Modernized Foundation (#Theme-component, ai). Lilly refactored a theme component so formatting works with Modernized Foundation while keeping compatibility with the old Foundation theme in Discourse Topic Excerpts & AI Gists Button, tying into the broader theme effort in Modernizing the Foundation theme.
rburkej added a self-hosting perspective by asking for a detailed hardware profile and operational impact notes in Enabling AI search crippled my server, reinforcing that semantic search rollout needs clear sizing guidance.
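As a back-of-envelope illustration of the sizing question (the figures here are assumptions for illustration, not Discourse’s published requirements), the raw storage for semantic-search embeddings grows linearly with the number of indexed items:

```python
def embedding_storage_bytes(num_items: int, dims: int = 1024,
                            bytes_per_float: int = 4) -> int:
    """Raw vector payload only; database row and index overhead add more."""
    return num_items * dims * bytes_per_float

# e.g. 100k topics with hypothetical 1024-dim float32 embeddings:
print(embedding_storage_bytes(100_000))  # 409600000 bytes, i.e. ~410 MB
```

The point rburkej raises still stands: the vector math is the easy part, while index memory, rebuild time, and embedding-generation load are what actually “cripple” an undersized server.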
Over the past week (2026-03-09 → 2026-03-16), Meta’s ai discussions clustered around product polish, reliability, and “real world” operations.
On the product side, Discourse moved closer to standardizing terminology by implementing the rename from AI Persona to AI Agent (Renaming AI Persona → AI Agent). On the infrastructure side, Discourse significantly expanded capacity for its hosted LLM offering—raising limits across all tiers and improving model quality and latency characteristics (Unlock All Discourse AI Features with Our Hosted LLM).
Meanwhile, operators focused on how AI fits into community rhythms: a request to delay AI Agent replies (so they feel less like a chatbot and more like a participant) surfaced both as a new Support topic (Adding a configurable delay to AI Agent responses) and as a follow-up in the longer-running “Agents” guide thread, where Discourse staff indicated that delayed responses would likely belong in a future automation overhaul rather than ai itself (AI bot - Agents).
Integration conversations had a notable bump too: Google’s Programmable Search / Custom Search constraints and deprecations are forcing a rethink of web search tooling, with Discourse exploring alternative providers and even “native search tools” from LLM vendors (Google Search for Discourse AI - Programmable Search Engine and Custom Search API). In parallel, community guides continued to expand around the Discourse MCP ecosystem, including a newly posted OpenCode CLI setup walkthrough (Discourse MCP Setup in OpenCode CLI).
“AI Persona” renamed to “AI Agent” (terminology + implementation) (#Feature, ai). Falco confirmed the rename work landed and pointed to the corresponding PR (Renaming AI Persona → AI Agent).
Hosted LLM capacity increased dramatically (plus model and performance upgrades) (#Announcements, ai). Discourse reported higher plan limits across the board and described improvements such as an updated “state-of-the-art open weights” model, higher max tokens per request, better tokens/sec, and improved time-to-first-token (Unlock All Discourse AI Features with Our Hosted LLM).
Delaying AI Agent responses to match community pacing (1–4 hours) (#Support, ai). saurabhmithal asked for a configurable delay so Agents feel less like instant chatbots (Adding a configurable delay to AI Agent responses). In the broader “Agents” thread, where the same request was raised, Falco clarified that it’s not possible today and positioned delayed responses as an automation capability, hinting at early planning for a major automation overhaul (AI bot - Agents).
Discourse MCP setup guidance expands (OpenCode CLI) (users, ai, mcp). pacharanero posted a tested guide for installing Discourse MCP into OpenCode CLI, emphasizing that users can even point an LLM at the guide URL to help with setup (Discourse MCP Setup in OpenCode CLI). The guide cross-links to the Codex CLI variant for other MCP clients (Discourse MCP Setup in OpenAI Codex CLI).
Sentiment debugging details: “allowed internal hosts” and job execution (#Support, ai, sentiment). satonotdead reported resolving part of the issue by adding an internal IP to allowed internal hosts, then manually running Jobs::SentimentBackfill but still wanting a complete historical backfill (Problems setting up Sentiment). Falco asked for clarification about whether at least the 60-day data appeared (Problems setting up Sentiment), and satonotdead confirmed dashboards work but the historical backfill is still the goal (Problems setting up Sentiment).
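For intuition on why the dashboards can populate while older history stays unclassified (a hypothetical sketch of a windowed backfill, not Discourse’s actual implementation), a job that only considers posts newer than a cutoff never touches older content unless the window is widened:

```python
from datetime import datetime, timedelta

def backfill_candidates(posts, window_days=60, now=None):
    """Select posts inside the backfill window; anything older is skipped."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    return [p for p in posts if p["created_at"] >= cutoff]

now = datetime(2026, 3, 16)
posts = [
    {"id": 1, "created_at": now - timedelta(days=10)},   # recent: classified
    {"id": 2, "created_at": now - timedelta(days=400)},  # historical: skipped
]
print([p["id"] for p in backfill_candidates(posts, now=now)])  # [1]
```

Under this model, satonotdead’s symptom (recent data visible, old posts untouched) is exactly what a default 60-day window would produce.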
Whether Discourse should ship an official “OpenClaw skill” / agent that acts as a user (#Feature, ai)
In a Chinese-language feature thread, sniper756 proposed an OpenClaw-driven agent that can post/organize content on behalf of a user to build knowledge bases efficiently (Discourse官方会出个官方的openclaw skill么?, roughly “Will Discourse ship an official OpenClaw skill?”). awesomerobot reiterated caution: Discourse is not enthusiastic about agents impersonating real users without an admin in the loop, pointing to the Meta ban policy context while leaving room for admin-support tooling (Discourse官方会出个官方的openclaw skill么?, openclaw plugin for discourse integration).
Copy/wording cleanup: duplicated “Default LLM” in an error string (ux, ai). Moin spotted an awkward repetition while looking at discourse_ai.ai_bot.agents.default_llm_required (“Default llm Default LLM…”) and reproduced it in the UI (Why is ‘Default LLM’ repeated…). awesomerobot confirmed it’s awkward and pointed to a fix in progress (Why is ‘Default LLM’ repeated…).
Prompting periodic AI summary reports to correctly capture vote counts (#Site_Management, automation, ai)
In the periodic summary reports how-to thread, julia1 asked how to craft a prompt so reports include the number of votes tied to feedback items (Discourse AI - Periodic summary reports).
MCP capability question: can MCP access PDF attachments in posts? (blog, ai, mcp)
In the Discourse MCP announcement thread, anaderi asked whether it’s possible for MCP to access PDF attachments uploaded to posts (Discourse MCP is here!).
Shauny
Proposed improving per-post translation ergonomics (especially on mobile) and suggested caching/saving translations to avoid repeated API calls (Translate post with AI and save translation). They clarified they didn’t want full auto-translate for the event—some friction was intentional—and instead wanted targeted per-post improvements (Translate post with AI and save translation).
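The caching idea can be sketched as follows (function names and the cache key are assumptions for illustration, not Discourse code): translate each post once per target locale and reuse the stored result, so repeated taps don’t re-hit the translation API:

```python
# Hypothetical cache keyed by (post_id, target locale).
_translation_cache: dict[tuple[int, str], str] = {}

def translate_post(post_id: int, raw: str, locale: str, api_call) -> str:
    """Call the (injected) translation API only on a cache miss."""
    key = (post_id, locale)
    if key not in _translation_cache:
        _translation_cache[key] = api_call(raw, locale)
    return _translation_cache[key]

calls = []
def fake_api(text, locale):
    calls.append(locale)
    return f"[{locale}] {text}"

print(translate_post(42, "bonjour", "en", fake_api))  # [en] bonjour
print(translate_post(42, "bonjour", "en", fake_api))  # cached, same result
print(len(calls))  # 1 -> the API was hit only once
```

Keying on the post rather than the raw text also means a cached translation can be invalidated when the post is edited, which fits the per-post, on-demand flavor of the request.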
Falco
Provided product context on translation UX, noting the existing design expectation: enable automatic translation and let users toggle it in-topic (Translate post with AI and save translation). Separately, they helped diagnose tool-call reliability by noting that Discourse AI’s read_timeout is 10 seconds and asking whether the external API exceeds that (Discourse ai 的工具调用超时如何解决?是否可以调整discourse超时时间,如何调整?, roughly “How can Discourse AI tool-call timeouts be resolved, and can the Discourse timeout be adjusted?”).
sniper756
Started a feature request (in Chinese) asking if Discourse would ship an official OpenClaw “skill,” describing workflows where an AI agent performs forum operations and reposts authorized content into categorized/tagged knowledge bases (Discourse官方会出个官方的openclaw skill么?).