This week, AI discussions spanned search UX improvements, localization quirks, embeddings configuration, and bug fixes post-upgrade. Administrators and developers delved into making AI search results more discoverable, tweaking Gemini embeddings settings, and resolving errors introduced in the 3.6.0 beta. Community members also explored using Discourse for niche groups, tested local LLM integrations, and reported composer glitches with pasted images. Major themes included #ai-support, ai-search, Bug, Community, and UX.
"We just deployed a big improvement to the underlying tech that powers semantic search…" – Falco ref
"Many users are still wary of AI so they don't toggle the switch…" – RBoy ref
Interesting Topics
Hiding XX results found using AI - enable toggle by default (Support, ai, ai-search) RBoy kicked off the thread by noting that AI search results were hidden by default. NateDhaliwal pointed to existing docs ref, and Falco explained how it now auto-toggles when native results are missing ref. A temporary theme script to force the toggle was also shared ref.
Wrong translation when post locale = UI locale (Bug, ai, dynaloc, content-localization) Jakob_Naumann reported that English posts were showing up in German after a default-locale change ref. Falco recommended purging and re-creating the localization cache ref.
Gemini API Embedding Configuration Clarification (Support, ai) RBoy asked what the sequence length setting maps to in the embedding configuration and how to throttle API backfills to avoid 429 errors ref. Falco confirmed that sequence length equals the model's token capacity (2048) and pointed out the hidden ai_embeddings_backfill_batch_size setting ref.
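For self-hosters who want to slow the backfill down, here is a minimal sketch of how hidden site settings are typically changed on a standard install, assuming ai_embeddings_backfill_batch_size behaves like other hidden settings; the value 50 is purely illustrative:

```
# On the server: enter the app container and open a Rails console
cd /var/discourse
./launcher enter app
rails c

# Inside the console: lower the hidden backfill batch size (50 is illustrative)
SiteSetting.ai_embeddings_backfill_batch_size = 50
```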
Exploring Reranking Options for Discourse AI (Support, ai) In a longer-running thread, Falco rolled out a major semantic search improvement ref, which is expected to reduce reliance on external rerankers. tpetrov asked whether the change also covers uploaded RAG documents, not just forum topics ref, and Falco confirmed it applies to all embedding use cases ref.
Would this work for a community of women over 45+ (Community, ai) bessnlj wondered if Discourse with AI-powered search fits a niche dating/coaching site ref. tobiaseigen clarified that meta.discourse.org is for Discourse hosts but encouraged spinning up a trial for custom use ref and pointed to existing communities for inspiration ref.
Local Ollama is not working with the Plugin (Support, ai) Tikkel faced an "Internal Server Error" when Discourse called the Ollama service, despite successful cURL tests ref. Falco asked for container logs ref, and Tikkel confirmed that adjusting the DISCOURSE_ALLOWED_INTERNAL_HOSTS syntax to use pipes solved it ref.
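As a rough sketch of that fix, assuming a standard discourse_docker install where settings are overridden through the env block of containers/app.yml (the hostnames below are placeholders):

```yaml
env:
  # Discourse list settings passed via env are pipe-separated, not comma-separated
  DISCOURSE_ALLOWED_INTERNAL_HOSTS: "ollama.internal|other-internal-host"
```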
Gemini Embeddings Issue After Discourse Update to 3.6.0 Beta 2 (Support, ai) Upgrading to 3.6.0.beta2 broke embedding tests for RBoy, who spotted that Gemini's old embedding-001 model had been retired ref. He fixed it by switching to gemini-embedding-001 in the plugin settings ref.
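As a quick sanity check that the replacement model name is live before changing the plugin setting, one could call the public Gemini REST endpoint directly; this sketch assumes an API key exported as GEMINI_API_KEY and is not part of the plugin configuration itself:

```bash
# Ask the Gemini API for an embedding from the new model name
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": {"parts": [{"text": "hello world"}]}}'
```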
Embedding error with 3.6.0 beta 2 (Bug, ai) A related bug, reported by RBoy, showed "invalid input syntax for type halfvec: '[NULL]'" when querying embeddings post-upgrade ref. This points to null-vector handling issues in the new release.
After sending the image, add this to the beginning of the message: [object InputEvent] (Bug, ai) kuaza discovered that copy-pasting images into the composer prepends an [object InputEvent] string ref. Uploading via the file selector avoids the glitch, leading to further UX tweaks.
How to solve discourse ai: No endpoints found that support tool use (Support, ai) whitewaterdeu saw a 404 "No endpoints found that support tool use" error when testing OpenRouter's qwen3-8b model ref. Disabling native tool integration resolved the issue ref.
Weekly AI Activity Summary: 2025-10-20 to 2025-10-27
Overview
This week's AI discussions on Meta spanned API troubleshooting, content localization, user feedback on translations, and plugin bug fixes.
In the Support category, Enit kicked off a deep dive after hitting a 400 Bad Request Using API error while trying to create a topic via the REST API. supermathie pointed them to the Discourse REST API documentation and requested authentication details ref. Moin then asked for log excerpts ref and wanted to know whether this was purely an API issue or tied to the AI plugin ref. Ultimately, the mystery was solved when the proper logs surfaced under /logs ref.
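For context, a minimal sketch of a topic-creation request against the documented Discourse REST API; the host, key, username, and category ID below are placeholders:

```bash
# Create a new topic; a 400 response typically points to a missing or invalid field,
# or an Api-Key that is not paired with the given Api-Username
curl -s -X POST "https://forum.example.com/posts.json" \
  -H "Api-Key: $DISCOURSE_API_KEY" \
  -H "Api-Username: system" \
  -H "Content-Type: application/json" \
  -d '{"title": "A topic created via the API", "raw": "Body of the first post.", "category": 1}'
```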
On the #Site Management front, the Content Localization - Manual and Automatic with Discourse AI discussion saw cmdntd propose making the tl URL parameter available to all users, not just guests ref. Falco clarified that it currently applies globally across the site ref. wenqin then tested the feature and suggested a "default (no translation)" option for multilingual learners ref, and Moin helped locate the toggle for viewing original content ref; wenqin confirmed the solution worked perfectly ref.
400 Bad Request Using API (Support, rest-api, ai): Enit's API call returned 400 Bad Request, leading supermathie and Moin through authentication checks, log diagnostics, and plugin context clarifications.
Enit: spearheaded the API deep dive with five posts – initial report ref, plugin context clarification ref, log details ref, solution confirmation ref, and a broader AI memory discussion ref.
Weekly AI Activity Summary: 2025-10-27 to 2025-11-03
Overview
This week, meta.discourse.org saw vibrant discussions across ai, content-localization, Feature, and Support. From multilingual translation preferences and concerns over undisclosed auto-translation, to hidden configuration settings and feature requests for AI-powered formatting, the community dove deep into how Discourse AI can be more flexible and transparent. Contributors also tackled LLM errors, debated structured output requirements, sought cost estimates for AI features, and explored token limits for embeddings. Overall, the focus remained on improving user control, enhancing reliability, and broadening provider support.
I think discourse-ai API needs a regression (ai, Dev) MoRanYue proposed dropping the structured-output requirement to broaden provider support. Falco explained why structured output matters in post 2, and MoRanYue offered XML-like separators as an alternative in post 3.
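To make that alternative concrete, here is a purely hypothetical illustration (not taken from the thread) of the kind of XML-like separators a model could be prompted to emit so replies remain parseable without provider-side structured output:

```xml
<!-- hypothetical reply format; the tag names are invented for illustration -->
<reply>
  <answer>Plain-text answer for the user goes here.</answer>
  <follow_up>Optional follow-up the bot may suggest.</follow_up>
</reply>
```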
Over the past week, the community dove into several translation and rate-limit challenges, as well as fresh plugin releases and AI helper configuration issues. Key themes included:
AI Persona Stability: Reports of the AI bot entering infinite loops and spamming highlighted the need to calibrate LLM temperature parameters (AI bot infinite loop and spamming).
Plugin Spotlight: The new llms.txt generator plugin promises to make forum content discoverable by LLMs (Discourse llms.txt Generator Plugin).
Rate Limits & Budget Errors: Discussions surfaced around Gemini Pro's thinking budget constraints and cost-input minimums, uncovering unexpected validation errors in both free and paid tiers (Gemini Pro thinking budget error, AI model cost input restriction).
Below are the 10 most interesting topics from the week, followed by a breakdown of who said what.
AI bot infinite loop and spamming (Support, ai-bot) wisewords reported that after creating a new persona the AI began delaying responses and posting repetitive spam in AI bugging out, having a mental breakdown, and Falco explained it was due to the LLM hitting an infinite generation loop at certain temperature settings in post 2.
Staff override for translation max age (Feature, translation, ai, content-localization) jrgong requested the ability for staff to bypass the AI translation backfill max age days setting when manually translating older posts in post 1, and Falco tested and confirmed that the manual translation button already overrides this backfill restriction in post 3.
Resetting Proofreader settings (Support, ai-helper) bksubhuti sought guidance on restoring missing Proofreader options in the AI helper menu in post 1, and Moin and the OP resolved it by rebuilding after correcting trust level configurations in post 4.
Missing language switcher after auto-translation (Support, ai, content-localization) After successfully backfilling translations, EasyChen could not see the language switcher on translated posts in post 1, with nat guiding them through enabling the site setting and checking post locale detection in post 2 and post 7.
LLM and Discourse AI settings hidden (Support, ai) Nima1 reported missing LLM tabs in the AI plugin on a Persian-locale site in post 1, and nat clarified that the "Discourse AI enabled" setting must be saved first to reveal the rest in post 2.
Default LLM model dropdown empty (Support, ai) undasein was unable to select a default model due to an empty dropdown in post 1, and NateDhaliwal pointed them to configure LLMs under "Plugins > AI > LLMs" in post 2.
Gemini Pro thinking budget error (Bug, ai) RBoy encountered a "Budget 0 is invalid" error when setting a zero or negative budget for the gemini-pro-latest model in post 1, and the team acknowledged they will investigate in post 2.
AI model cost input restriction (Bug, ai) RBoy noted that the cost fields for AI model input and output prevent values below 0.1 in Can't enter AI model cost of less than 0.1, overriding entries like 0.075 back to zero.