This week's discussions on meta.discourse.org revolved around exploring new AI models, configuring AI-powered spam detection, managing AI personas and prompts, and leveraging semantic search capabilities. Users shared their experiences with various language models, discussed potential improvements, and sought guidance on optimizing AI features within the Discourse platform.
Interesting Topics
Don started a discussion on Testing the new AI models, sharing his experiences with models such as grok-2-1212 and Gemini Flash 2. Challenges with language detection, formatting issues, and search capabilities were highlighted.
Sam announced a new AI-powered spam detection module in Discourse AI, giving better control over spam scanning. He shared a configuration guide and encouraged users to share their experiences.
MarcP sought help with the Semantic Search API, and Sam provided guidance on accessing pure embeddings search and using API keys to bypass rate limits.
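For readers who want to try this, here is a minimal sketch of an authenticated request to Discourse's standard search endpoint: the `Api-Key`/`Api-Username` headers are the documented way to avoid anonymous rate limits, while any parameter that would force embeddings-only results is instance-specific and deliberately omitted here.

```python
import requests

# Minimal sketch: query the Discourse search endpoint with an API key so the
# request is not subject to anonymous rate limits. URL and credentials are
# placeholders for your own instance.
BASE_URL = "https://forum.example.com"
HEADERS = {
    "Api-Key": "YOUR_API_KEY",        # generated under /admin/api/keys
    "Api-Username": "system",
}

resp = requests.get(
    f"{BASE_URL}/search.json",
    params={"q": "how do I configure embeddings"},
    headers=HEADERS,
    timeout=30,
)
resp.raise_for_status()
for topic in resp.json().get("topics", []):
    print(topic["id"], topic["title"])
```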
roniw raised concerns about privacy and data access when using Discourse's default LLM providers, to which Sam responded, assuring that Discourse's LLMs are self-hosted and data is not shared with third parties.
Don started the Testing the new AI models discussion, sharing his experiences and seeking advice from MihirR and Sam on improving language detection and search capabilities.
MarcP sought help with the Semantic Search API, and Sam provided guidance on accessing pure embeddings search and using API keys.
roniw raised concerns about privacy and data access when using Discourse's default LLM providers, and Sam offered reassurance about Discourse's self-hosting and data privacy practices.
This week on the Discourse meta forum, discussions centered on the AI-powered spam detection feature, the proposed “Solved Finder” tool, issues with AI summarization, and various bug reports and feedback related to AI capabilities. Users shared ideas, reported bugs, and suggested improvements, fostering a collaborative environment for improving AI integration within the Discourse platform.
Interesting Topics
AI-powered spam detection: Users discussed the rollout of the AI-powered spam detection feature, including confusion about how to enable it on hosted accounts and problems with the toggle behavior. @sam clarified the process, and @Saif mentioned an upcoming fix for the toggle issue.
Solved Finder: @Don proposed adding a button for staff users to find possible solutions within a topic, similar to the AI summary feature. @sam suggested exploring custom prompts or a “Talk to my topic” option to address this need.
[grid] from AI vision in chat: @zogstrip shared that @j.jaffeux had fixed an issue with displaying AI-generated images in the chat feature, providing a screenshot of the working fix.
AI summary backfill is wasting tokens summarizing PMs: @markschmucker raised a concern about the AI summary backfill process summarizing private messages (PMs), which could be inefficient. @Falco acknowledged the issue and mentioned the possibility of introducing a setting to skip PMs, similar to the existing setting for embeddings.
Unable to change ai_embeddings_model: @Overgrow reported an issue changing the ai_embeddings_model setting and enabling embeddings through the admin panel. @sam provided a link to a pull request aimed at fixing the problem.
How does summary regeneration work?: @markschmucker shared his experience regenerating summaries with a different LLM and the challenges he faced when the backfill process stopped after reaching the specified age limit, even though some topics had failed due to rate limits.
AI summary feedback: @markschmucker sought clarification on how to handle invalid summaries, referring to an earlier comment by @sam about possible fixes through refactoring or persona tuning.
@Don proposed the idea of a “Solved Finder” button for staff users to find possible solutions within a topic, similar to the AI summary feature.
@MachineScholar shared his perspective on the dilemma of summarizing PMs, suggesting the decision should depend on the type of community and the potential value of generating embeddings for PMs.
@markschmucker shared his experience regenerating summaries with a different LLM and the challenges he faced when the backfill process stopped after reaching the specified age limit, even though some topics had failed due to rate limits.
Last week on meta.discourse.org, there was a wide variety of discussions about Discourse's AI features, with users sharing experiences, offering feedback, and seeking support. The community explored Discord and Discourse integration, the potential of AI-powered Q&A bots, and AI-powered spam detection capabilities. There were also conversations about the new Smart Dates feature in Helper, the AI Persona Editor, and the availability of AI features on the Standard and Business plans.
Interesting Topics
Do you use Discord and Discourse? kicked off a discussion about using Discord and Discourse side by side, their respective strengths, and the potential of an AI-powered Q&A Discord bot that leverages Discourse as a knowledge base.
Write out smarter dates with AI introduced the new Smart Dates feature in Helper, which converts human-written times and dates into Discourse-compatible, timezone-friendly formats.
AI-powered spam detection highlighted the efficiency of Discourse's AI-powered spam detection system, with users reporting positive experiences identifying and hiding spam posts.
Discourse AI - AI Triage explored the AI triage feature, which automates topic categorization using AI models, leading to discussions about the scalability and configuration of this feature.
Proofreading is too creative highlighted a case where the AI proofreading feature in Helper made overly creative suggestions, prompting a discussion about the appropriate level of creativity for such features.
Feature requests for AI summarization collected requests for improvements to the AI summarization feature, including prioritizing recently updated topics and the ability to skip summarizing personal messages.
How to hide the bot completely, for everyone addressed a user's request to hide the Discourse AI bot from their community, leading to a discussion about limitations and possible workarounds.
Discourse AI - Helper continued the ongoing discussion about the Helper feature, with users offering feedback and seeking clarification on its functionality.
Activity
Saif shared insights on using Discord and Discourse side by side, highlighting the potential of an AI-powered Q&A Discord bot that leverages Discourse as a knowledge base.
rburkej discussed his preference for Discourse over Discord for user-generated data and gaming communities, while acknowledging Discord's ubiquity among gamers.
This week on the meta.discourse.org forum, discussions revolved around various aspects of the Discourse AI plugin, including new features, improvements, and user questions. The AI-powered spam detection tool drew attention, with users seeking clarification on its configuration and use. The “Ask Discourse” feature was also introduced, allowing users to interact with an AI persona for help with documentation. Other topics included sortable sentiment tables, handling detailed content with AI, regenerating topic summaries, and managing AI usage quotas.
Interesting Topics
@per1234 highlighted an issue where default LLMs could not be selected in the ‘Spam’ tab despite being configured. @sam acknowledged the problem and provided a fix.
@Saif introduced the Ask Discourse feature, which lets users interact with an AI persona to search the documentation and answer common Discourse-related questions.
@noahl requested the ability to sort sentiment tables from highest to lowest count, making it easier to identify and address common sentiments.
@c12gene raised a concern about the AI bot's inability to read and process detailed content wrapped in [summary] tags, prompting discussions about prompt engineering and tool use.
@awesomerobot provided an update on the AI summary box closing issue, with a link to the relevant pull request.
@EricGT shared insights on LLM prompt evaluations, emphasizing the importance of understanding prompt effectiveness for LLM development.
@per1234 reported a bug where posts and accounts were not always restored when a flag from Discourse AI spam detection was rejected.
@Moin sought guidance on hiding the bot completely for all users, prompting a discussion about the bot's visibility in search results.
@awesomerobot shared an update on the AI summary box closing issue, with a link to the relevant pull request.
@markschmucker asked about limiting the number of AI tokens a user can use in a day, and @sam provided a link to an upcoming pull request addressing this feature. @MachineScholar sought clarification on the implementation details, and @sam explained that the limits would apply per user rather than being shared among group members.
@EricGT shared insights on the importance of LLM prompt evaluations for effective prompt engineering.
@per1234 reported a bug where posts and accounts were not always restored when a flag from Discourse AI spam detection was rejected.
@Moin sought guidance on how to hide the bot completely for all users, prompting a discussion with @Saif about the bot's visibility in search results.
@BrianC asked about the possibility of letting users upload documents for the AI bot to read and process, and @Saif clarified that this use case is not currently supported.
@MarcP asked about refreshing summaries for single-post topics that are edited frequently.
This week on the meta.discourse.org forum, discussions revolved around various AI-related topics, including a proposed moderation tool for formatting code using AI, the ongoing development of Discourse AI features, issues with AI summarization backfills, and the potential for uploading and discussing PDFs within the composer. Additionally, there were conversations about setting usage limits for AI, evaluating costs between different AI providers, handling AI-generated spam, and exploring the capabilities of AI bots and custom tools.
Interesting Topics
@merefield raised an idea for a moderation tool that would allow trusted users to format code blocks using AI, potentially improving readability and assisting new users who struggle with proper code formatting.
Discussions continued on the Discourse AI plugin, with @sam expressing interest in allowing users to upload large files and ask questions about the content using a persona-based approach.
@markschmucker encountered an issue where the AI summarization backfill process kept regenerating summaries for the same topic, even after a valid summary was already present. This led to a fix by @Roman to make the job more resilient.
@BrianC proposed a feature request to allow users to upload PDFs or text files directly in the composer and have the AI process and respond to questions about the content.
There was a discussion around setting per-group token and usage limits for AI features, with @sam clarifying that quotas are defined per group and applied per user, rather than being shared among users.
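To make the distinction concrete, here is an illustrative sketch of the semantics described above (not plugin code, and the group names and limits are made up): the quota is defined once per group but tracked and enforced for each member individually.

```python
from dataclasses import dataclass

# Illustrative sketch only: a quota is defined per group, but usage is
# accumulated and checked per user, so members never share a single pool.
@dataclass
class GroupQuota:
    group: str
    daily_tokens_per_user: int

quotas = {"trust_level_2": GroupQuota("trust_level_2", 50_000)}
usage_today = {("trust_level_2", "alice"): 48_000, ("trust_level_2", "bob"): 1_000}

def can_spend(group: str, user: str, tokens: int) -> bool:
    quota = quotas[group]
    used = usage_today.get((group, user), 0)
    return used + tokens <= quota.daily_tokens_per_user  # per-user check, not per-group

print(can_spend("trust_level_2", "alice", 5_000))  # False: Alice is near her own limit
print(can_spend("trust_level_2", "bob", 5_000))    # True: Bob's usage is independent
```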
There was a bug report regarding posts and accounts not being restored when flags from Discourse AI spam detection were rejected, which @sam addressed with a fix.
Discussions took place around self-hosting embeddings for DiscourseAI, with @sam mentioning ongoing work to restructure the embedding configuration and plans to support multi-model embeddings.
@David_Ghost inquired about the ability of AI Triage to perform searches and avoid topics with similar titles based on creation dates.
@David_Ghost inquired about the ability of AI Triage to perform searches and avoid topics with similar titles based on creation dates, with @sam indicating that such “agent-like” behaviors are being considered.
@c12gene reported an issue with the AI bot not being able to read summaries and detailed content, which was addressed by @MachineScholar through system prompt improvements.
@BrianC asked about tying token limits to subscriptions and allowing more expensive models to be used for a fee, with @sam confirming that different quotas can be set for different user groups.
This week on meta.discourse.org, the Discourse team continued to enhance and refine the AI capabilities within the platform. Key discussions revolved around improving the AI summarization feature, managing costs through usage quotas, and integrating new language models. The community also explored potential use cases, such as formatting code with AI assistance and enabling document uploads for AI analysis.
Interesting Topics
Translate Discourse automatically (without a button) (ref)
The team provided updates on their progress towards enabling automatic translation of Discourse topics using AI models. While the initial target is to have the topic page translatable via a language toggle, challenges around search functionality and handling multilingual content remain.
A new feature was released that allows administrators to define usage quotas for AI models, enabling better cost control and fair access to AI features across the community.
sam proposed the idea of allowing certain user groups access to an “AI Helper” feature for formatting source code in posts, acknowledging the potential performance trade-offs.
AI Plugin Causes All Posts to Be Unreadable in Latest Discourse Version (ref)
shannon1024 reported a critical bug where the AI plugin rendered all posts unreadable after updating to the latest Discourse version. The issue was related to the AI embeddings configuration and was resolved by disabling the relevant setting.
Saif provided guidance on factors to consider when choosing a Large Language Model (LLM) for Discourse AI, including performance, context length, compatibility, language support, multimodal capabilities, and speed.
DeepSeek provider support? What to do when model provider isn’t in “Provider” list? (ref)
MachineScholar inquired about integrating the DeepSeek R1 model, and Falco provided a solution: configuring it as an OpenAI model.
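The workaround works because DeepSeek exposes an OpenAI-compatible chat API, so an OpenAI-style provider entry (or the stock OpenAI client) can reach it by overriding the base URL. A minimal sketch, using the endpoint and model name from DeepSeek's public documentation (treat both as assumptions for your own setup):

```python
from openai import OpenAI

# Minimal sketch of DeepSeek's OpenAI compatibility: the standard OpenAI client
# can talk to DeepSeek-R1 once the base URL is overridden.
client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",
    base_url="https://api.deepseek.com",   # OpenAI-compatible endpoint per DeepSeek docs
)

reply = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek-R1
    messages=[{"role": "user", "content": "Summarize this week's forum activity."}],
)
print(reply.choices[0].message.content)
```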
Saif introduced the AI usage page, designed to help administrators understand how the community is utilizing Discourse AI features over time, aiding in cost estimation and management.
shannon1024 reported a critical bug where the AI plugin rendered all posts unreadable after updating Discourse, which was resolved by disabling the AI embeddings setting.
Saif provided guidance on factors to consider when choosing an LLM for Discourse AI, such as performance, context length, compatibility, language support, multimodal capabilities, and speed.
This week on meta.discourse.org, discussions focused on AI-related topics, features, and integrations. Key themes included debugging AI chat, configuring AI spam detection, using reasoning models like DeepSeek-R1, and exploring the capabilities of the AI Helper plugin. Users shared insights on cost comparisons, performance evaluations, and potential use cases for various language models.
Interesting Topics
@dsims ran into an issue with the AI Helper post illustration feature, leading to a discussion about the required settings update. [ref](AI-Helper post illustration error. ai_openai_api_key missing)
@oppman sought guidance on integrating AI with external sites and GitHub repositories for a software developer community. [ref](AI Integration with Specific External Sites and GitHub)
Users discussed the cost and performance of the DeepSeek-R1 reasoning model compared with other language models such as GPT-4 and Claude. [ref](DeepSeek provider support? What to do when model provider isn’t in "Provider" list?)
@MachineScholar reported a random error when interacting with the DeepSeek-R1 model through the AI Bot. [ref](DeepSeek-R1 randomly producing "Job exception: undefined method `finish’ for nil" error)
@Eric_Platzek sought help debugging AI chat and clarifying the usage and costs associated with different language models. [ref](Debugging AI chat)
@markschmucker shared his experience with the AI Web Artifacts feature and asked about editing generated code and starting new contexts with existing artifacts. [ref](Announcing: AI Web Artifacts)
@NKERIFAC_CLAUD_NBAPNON ran into a problem with the Discourse AI embeddings configuration after an update, leading to a discussion about troubleshooting and configuration steps. [ref](Problems with Discouse AI embeddings configuration)
Activity
@sam provided information on exporting forum posts for manual upload into external LLMs using the Data Explorer. [ref](Exporting all Forum Posts for Manual Upload into External LLMs?)
@Roman_Rizzi shared updates on resolving issues with the AI plugin causing posts to be unreadable in the latest Discourse version. [ref](AI Plugin Causes All Posts to Be Unreadable in Latest Discourse Version)
@per1234 reported an issue where posts and accounts were not always restored when flags from Discourse AI spam detection were rejected, leading to a discussion about possible fixes. [ref](Posts and account not always restored when flag from Discourse AI spam detection rejected)
@dsims suggested adding a quality parameter to the DALL-E 3 image generation feature to reduce costs. [ref](Image sizes of Dall-E 3?)
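For context, the public OpenAI Images API already exposes such a knob for dall-e-3; a minimal sketch with an illustrative prompt (a Discourse-side setting would presumably just forward this value):

```python
from openai import OpenAI

# Minimal sketch of the cost lever discussed above: dall-e-3 accepts a
# `quality` parameter, and "standard" images cost less than "hd".
client = OpenAI()  # reads OPENAI_API_KEY from the environment

image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor illustration of a community forum mascot",  # illustrative prompt
    size="1024x1024",
    quality="standard",  # "hd" gives finer detail at a higher price
    n=1,
)
print(image.data[0].url)
```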
@StefanoCecere shared a suggestion to include the day name when generating dates with the AI Helper. [ref](Write out smarter dates with AI)
@BrianC expressed interest in uploading and discussing PDFs in the composer. [ref](Upload and discuss pdfs in composer)
Over the past week (2025-02-03 to 2025-02-10), AI discussions on meta.discourse.org have been both vibrant and diverse. Active debates ranged from fine-tuning the AI helper’s ability to distinguish between Discourse and Discord to exploring innovative ways to pass data to artifacts and chain multiple AI triage scripts. Users focused on enhancing features (as seen in discussions like AI helper does not know the difference between Discourse and Discord and post by sam in Support and ai) while also addressing bugs and configuration issues in sentiment analysis and semantic search. In-depth posts covering topics such as flexible artifact embedding (How do I pass data to an artifact? through post 7) and automated AI workflows (AI + Automation Governance: Orchestrating Independent AI Triage Scripts to post 8) demonstrated the community’s commitment to practical innovation. Meanwhile, bug reports such as issues with sentiment displays and semantic search anomalies were met with proactive troubleshooting by experienced users. These discussions, backed by numerous posts and detailed responses, highlight a week of forward-thinking ideas balanced by diligent debugging.
Interesting Topics
AI Helper Confusion (ai, Support): RGJ raised concerns in post 1 about the AI helper mixing up Discourse with Discord, while sam elaborated in post 2 on potential prompt enhancements.
AI Summarization Backfill Update (ai, ai-summarize): Roman_Rizzi confirmed improvements in the summarization process via post 18, highlighting the use of last_posted_at and revision checks.
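A tiny sketch of the check described above, regenerating a summary only when the topic has changed since the summary was produced (field names mirror the discussion; this is not the plugin's actual code):

```python
from datetime import datetime

# Illustrative sketch: a summary is stale only if the topic saw activity
# (a new post or a revision) after the summary was generated.
def summary_is_stale(last_posted_at: datetime,
                     last_revised_at: datetime,
                     summary_generated_at: datetime) -> bool:
    latest_change = max(last_posted_at, last_revised_at)
    return latest_change > summary_generated_at

print(summary_is_stale(datetime(2025, 2, 9, 12, 0),
                       datetime(2025, 2, 7, 8, 0),
                       datetime(2025, 2, 8, 9, 0)))  # True: a post arrived after the summary
```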
Discourse AI Plugin Settings Query (official, ai, Plugin):
In post 193 of the “Discourse AI” topic, Bathinda queried the missing setting for topic mentions, prompting further discussion.
Semantic Search Anomaly (ai, ai-search, Support): tyler.lamparter reported challenges with AI semantic search in post 1, noting discrepancies between AI-powered and normal search results.
DeepSeek-R1 Error Resolution (ai, Bug): MachineScholar confirmed in post 4 that DeepSeek-R1 now functions reliably despite intermittent 502/504 errors from external endpoints.
This week on meta.discourse.org the AI community engaged in in‐depth technical discussions and experiments with several new and evolving features. Contributors debated the feasibility of extending RAG to support diverse PDF formats—highlighting nuances in text extraction and OCR reliability (ref, ref)—and launched an intriguing one‐click experiment for AI summarization with mixed impressions (ref, ref). There were spirited conversations around ChatGPT’s role in forum assistance (ref, ref), together with detailed troubleshooting sessions for semantic search issues that consistently returned irrelevant results (ref, ref). Additional topics—including problems with rake tasks in the AI plugin (ref, ref), automation governance experiments (ref, ref), and fixes for persona tool JSON errors (ref, ref)—kept the forum buzzing with ideas and actionable fixes. Throughout these discussions, top contributors like sam, Yenwod, Jagster, Saif and many others drove the conversation forward by sharing clear troubleshooting steps, potential feature improvements, and critical reflections on AI integration in forum functionalities.
Interesting Topics
Will RAG Support PDF Files in the Future?
Users debated the challenges of supporting all types of PDFs in RAG workflows. MachineScholar sparked the discussion with a comment praising the commit (ref), while Saif explained the intricacies of handling text versus image PDFs (ref). Subsequent troubleshooting steps were detailed by Yenwod when he encountered indexing delays (ref) and further debugging was shared (ref, ref, ref).
New Experiment: Enable AI Summarize on your Discourse with one-click!
In this experiment, users evaluated a one-click activation for summarizations. shooj raised an initial query about limiting the feature to admins or mods (ref), and Jagster argued over its universal benefits (ref). Contributions by Arkshine provided CSS-based workarounds (ref), while additional observations by Jagster and Carverofchocie sparked extra debate (ref, ref, ref, ref).
How are we all feeling about ChatGPT and other LLMs and how they’ll impact forums?
The conversation here balanced optimism with caution. Tris20 offered insights on repetitive answers by LLMs in forum settings (ref), while Saif looked at future directions for LLM assistance in topic formulation (ref).
Getting a lot of no results for semantic search
A technical debate emerged when users reported that AI search always returned roughly 40 results regardless of query relevance. sam began the discussion (ref), and tyler.lamparter shared his observations (ref, ref). Further elaboration on model behaviors and result uniformity was provided by sam and Falco (ref, ref, ref, ref).
Rake tasks in the AI plugin not working
A practical issue was raised by Yenwod regarding failures in AI rake tasks, especially with ruby-progressbar related errors (ref). His follow-up post provided more context (ref), while sam and Yenwod further discussed potential remedies (ref, ref).
AI + Automation Governance: Orchestrating Independent AI Triage Scripts
This thread explored automating workflows for AI triage. Cloud_spanner shared a detailed workflow proposal (ref), with sam suggesting a custom tool for triage automation (ref), and Cloud_spanner reinforcing the benefits of a flexible IFTTT-style approach (ref).
AI Persona using Categories tool – “An empty string is not a valid JSON string”
An error in using the Categories tool left users troubleshooting persona failures. markschmucker consistently encountered the JSON error (ref), while sam confirmed the bug and soon provided a fix (ref, ref).
All AI functions are working ok, but AI search gives 500 error
A disruptive 500 error in AI search caught attention on this thread. Bathinda reported the initial error (ref) and subsequent status updates (ref, ref). Jagster guided users through troubleshooting with actionable solutions (ref, ref, ref).
Discourse AI – Spam detection
Focusing on content quality, this topic examined AI’s capacity for spam detection. jordan-violet shared initial testing scenarios (ref, ref), while Jagster critiqued the logical phrasing in the detection rules and offered clearer alternatives (ref, ref, ref, ref).
How to properly debug AI Personas?
A brief yet vital discussion focused on debugging AI personas where Falco inquired about the LLM/provider specifics (ref), and Overgrow responded with the details of his OpenAI setup (ref).
This week on meta.discourse.org (Feb 17–Feb 24, 2025) the AI category has been buzzing with troubleshooting, experimentation, and feature enhancements. Users such as sam, Falco, hendersj, and others have been busy addressing critical bugs, testing integrations with local AI models like Ollama, and exploring new capabilities around PDF processing, search, and API interactions. For example, a serious page display bug raised by shannon1024 quickly attracted responses from Arkshine and sam, while robust troubleshooting in Getting discourse ai to work with ollama locally has yielded progress despite SSRF and streaming configuration challenges. Constant activity in threads discussing the performance of our AI forum helper, hosted model cost clarifications, and even Google Search API integration tests underscores a community deeply invested in making Discourse AI even better. Read on for a more detailed breakdown of the interesting topics and user activities from the past week.
Why is my AI forum helper struggling to answer questions?
In this discussion, sam breaks down common misconceptions about AI performance and walks through the inner workings of the RAG system—clarifying why even flagship models can sometimes underdeliver.
Cost of CDCK Hosted models xeraa and Falco discussed the experimental availability of hosted LLMs and embeddings, emphasizing that these features are currently free aside from fair use constraints. See follow-ups in post 2, post 3, and post 4.
PDF Support and Upload Functionality
Two threads have taken center stage in this area. In PDF support in Discourse AI, sam outlines capabilities for both basic text extraction and enhanced LLM-powered processing. This discussion continues with user excitement in Upload and discuss pdfs in composer, where MachineScholar highlights UI challenges and delayed indexing issues.
shannon1024
As the reporter of a critical page display bug, shannon1024 sparked a chain reaction of fixes and confirmations across multiple posts in that thread.
Overview
This week on meta.discourse.org the community’s AI activity has been energetic and multifaceted. Discussions have spanned innovative feature requests and detailed technical troubleshooting. A central theme has been the need to intelligently automate responses – as introduced in the Auto responder using AI discussion by Saif – with follow‐up examples from EricGT (post 2) and creative suggestions by hel_Sinki (post 3). Simultaneously, practical challenges such as enabling PDF support for AI analysis were examined in PDF support in Discourse AI with details by hameedacpa (post 7) and clarifications from sam (post 8). Meanwhile, integration issues with Claude 3.7 were dissected in the Error using Claude 3.7 Sonnet with Discourse AI plugin thread—fueling technical troubleshooting by emansilla (post 1) and iterative fixes by Falco (post 2). Other threads on LLM settings, experimental search results, and connection resets further enriched the dialogue. Collectively, these discussions underline the community’s commitment to exploring dynamic AI integrations that not only automate tasks but also enhance user experience across Discourse’s features.
PDF support in Discourse AI (#SiteManagement, how-to, ai)
• Discussions centered on testing PDF upload and analysis were led by hameedacpa (post 7) with clarifications and follow-ups by sam (post 8 and post 10), while Falco provided configuration insights (post 12).
Error using Claude 3.7 Sonnet with Discourse AI plugin (Bug, ai, ai-bot)
• Technical hurdles emerged when emansilla reported integration errors (post 1), provoking responses and troubleshooting steps from Falco (posts 2, post 4, and post 6).
Discourse AI - AI bot (#SiteManagement, ai-bot)
• A demonstration by MarkDoerr (post 150) highlighted use cases that drive the AI bot’s interactions.
Discourse AI causing new SSL and Connection Reset by Peer errors (Bug, ai)
• Issues with SSL and connectivity were investigated by oznyet (post 7 and post 9) with remedial updates from sam (posts 8 and 10).
Question about release note for experimental search results (General, ai)
• sam explained fine-tuning efforts (post 3), while Jagster shared community concerns (post 4).
Discourse AI - Large Language Model (LLM) settings page (#SiteManagement, how-to, ai)
• sam reminded users of supported configurations via the LLM settings (post 5).
How do you use Discourse AI? Tell us and make it even better! (Feature, feedback, ai)
• A feedback thread encouraged sharing of use cases, with contributions from Bhack (post 22 and post 24) and an insightful comment by Saif (post 23).
What LLM to use for Discourse AI? (#SiteManagement, how-to, ai)
• In response to user queries, Saif confirmed support for new models (post 5).
Error in any AI Tool with no parameters, e.g. “tags” (Bug, ai)
• sam detailed an error scenario associated with tool parameters (post 6).
Conversational AI Search coming to Discourse AI
Users debated ways to streamline AI search responses—suggesting interactive triggers like a clickable Ask AI button to reduce wait times and enhance conversation continuity.
Rules Surrounding Writing Topics using AI
A spirited discussion emerged about the ethics and quality of fully AI-generated topics, with community members weighing in on taste, accuracy, and moderation practices.
Rebranding the ai spam detection account
Community members clarified that rebranding the out-of-the-box AI spam detection account is safe, as internal operations depend on a fixed user ID.
Discourse AI - Large Language Model (LLM) settings page
This topic hosted technical inquiries about LLM selection, configuration, and cost/quality balance, generating discussion on optimal setups for self-hosted and hosted instances.
Using AI To Tag And Categorize Forum Posts
Members shared early experiments with automating topic categorization and tagging, discussing custom tool integrations and the nuances of silent responders.
Getting a lot of no results for semantic search
Users explored challenges with semantic search, comparing new AI conversational approaches with legacy toggled keyword searches and testing alternative configurations.
Gemini ai bot to draw picture in chat
The creative potential of integrating Gemini’s image generation capabilities into live chat was introduced, sparking excitement about future multimodal AI features.
Discourse AI spam detection replaces Akismet plugin
A crucial announcement led to discussions that clarified the scope of the changes for hosted customers versus self-hosters, as well as questions about language and performance.
Why does the AI model stop responding
Troubleshooting posts examined intermittent model timeouts, with participants sharing logs and suggestions to address the occasional failure in prolonged conversations.
This week on Meta Discourse, the community dove deep into innovations at the intersection of AI and forum management. Members explored experimental implementations such as sam's work on Experiments with AI based moderation on Discourse Meta – where discussions ranged from leveraging Gemini Flash 2.0 (post #7) to addressing false positives (post #3). At the same time, the conversation around new interaction modalities was vibrant, with topics like Is there a way to chat with Discobot? generating feedback from trusktr, tobiaseigen, and Canapin – even prompting interface tweaks such as the redirection fix by Lilly (post #10).
Experiments with AI based moderation on Discourse Meta
Discussions highlighted the promising evolution of AI for moderation, detailing experiments with Gemini Flash 2.0 (post #7) and updates ensuring that only public posts are scanned (post #6).
Is there a way to chat with Discobot?
Users explored the idea of interacting with Discobot in a chat format, with suggestions to use ask.discourse.com (post #3) and commentary on domain redirects (post #9).
What’s the cheapest/best AI to use for AI Spam?
The community weighed cost against performance by comparing models like GPT-4o-mini, Claude 3.5 Haiku, and Gemini 2.0 Flash – with hands-on recommendations from users such as trusktr (post #1) and NateDhaliwal (post #4).
FYI: Cloudflare AI Labyrinth
An insightful share on Cloudflare’s strategy to trap misbehaving bots using AI-generated mazes sparked curious reactions and humorous takes (post #1, post #3).
Hide persona from composer dropdown
A user-friendly debate was held on how to prevent personas from cluttering the composer dropdown, with suggestions for a dedicated “Hide from composer” option (post #1, post #3).
What’s the best way to add image descriptions?
Accessibility improvements were discussed with reference to current markdown capabilities and potential UI enhancements for image ALT text (post #17, post #19).
Setting to manually close AI Helper’s popup modal
With feedback about the finicky nature of the AI Helper modal, a feature request emerged to add a manual close option for enhanced usability (post #1).
Conversational AI Search coming to Discourse AI
Users provided feedback on the new search summary interface, discussing minor UI glitches like missable “More…” text and unexpected closures (post #10).
How to properly debug AI Personas?
Troubleshooting steps and questions about debugging inconsistent AI persona responses spurred a helpful discussion among experienced users (post #4).
Enrich API Calls of AI Plugin?
An enterprise user raised ideas on enriching outgoing API requests with proper authentication headers, opening a conversation on integrating internal auth endpoints with custom tooling (post #1).
This week on meta.discourse.org the community passionately discussed several AI topics, driving the conversation forward on configuration issues, user experience, and new feature experiments. Members collaborated on troubleshooting critical bugs in AI tools—ranging from errors when creating new AI personas to refining popup modal behaviors—while also sharing insights about LLM settings, sentiment analysis, image captioning, embeddings, and even experiments in AI moderation. These discussions, spread across multiple topics such as Unable to create new Personas (post 1), Setting to manually close AI Helper’s popup modal (post 3), LLM settings page (post 10) and Discourse AI - Sentiment (post 41), underscore the shared passion and technical know‐how applied by our members. The blend of bug reports, configuration tweaks, and creative ideas has paved the way for improvements to our AI functionality—ensuring that both self-hosted and managed services continue to run smoothly. For instance, fixes announced by joffreyjaffeux in Unable to create new Personas (post 6) and keegan in Setting to manually close AI Helper’s popup modal (post 3) highlight our concerted effort toward continuous improvement. Below is a detailed roundup of the most interesting topics and user activities from the past 7 days.
Interesting Topics
Unable to create new Personas (Bug, ai, ai-bot) MachineScholar kicked off this discussion by reporting an error on attempting to create new AI personas (post 1). sam joined with troubleshooting suggestions (post 2 and post 4), while further investigation by MachineScholar confirmed the issue (post 3). Later, joffreyjaffeux provided a fix by referencing the upcoming patch (post 6) and MachineScholar confirmed resolution (post 7).
Setting to manually close AI Helper’s popup modal (Bug, ai, ai-helper)
When users expressed the need for more control over AI Helper’s popup modal, keegan swiftly responded by moving the request to Bug and subsequently pushed out a fix (post 3). MachineScholar later confirmed that the issue was resolved (post 5).
Discourse AI - Large Language Model (LLM) settings page (#SiteManagement, how-to, ai)
In this thread, AquaL1te shared their experience with setting up the LLM settings page, noting a quirk with the default model name (post 10 and post 12). Saif contributed by asking for clarifications regarding model identifier discrepancies (post 11 and post 13).
Discourse AI - Sentiment (#SiteManagement, how-to, ai-sentiment, content)
This topic saw RBoy raise concerns about the cessation of the sentiment process since January 2025, prompting discussions regarding configuration changes (post 41). Falco responded with an explanation about the changes in sentiment server deployments (post 42).
Experiments with AI based moderation on Discourse Meta (Community, moderation, ai)
Creative ideas were floated by RGJ, who proposed that an AI moderation bot could intermittently signal that a post required no action—injecting an element of transparency into automated moderation (post 10).
A setting for AI-enabled image title default value (Support, ai) fbpbdmin started this discussion by suggesting that AI-enabled image titles should be off by default in order to save tokens (post 1 and post 4). keegan provided configuration guidance on the available settings (post 3).
Self-Hosting Sentiment and Emotion for DiscourseAI (#Self-Hosting, ai, ai-sentiment)
In a discussion aimed at self-hosters, RBoy raised important questions about running local instances for sentiment analysis, seeking advice on resource requirements and integration techniques (post 4).
Discourse AI - Embeddings (#SiteManagement, ai, ai-search, related-topics)
Lastly, Saif prompted an update in the embeddings discussion to ensure that the feature stays aligned with the platform’s evolving needs (post 37).
Saif
• Engaged actively in the LLM settings discussion with clarifications in post 11 and post 13, and later contributed to enhancing the AI Embeddings feature in post 37.
This week on meta.discourse.org the AI discussion threads have been buzzing with innovative ideas and practical troubleshooting. Community members explored everything from fresh feature proposals to intricate technical issues. For example, the AI Avatar Generator thread saw tpetrov kick off a discussion with ideas for generating personalized avatars (tpetrov post 1, sam post 2, tpetrov post 3), while the lively Ways to Add Knowledge to My Persona discussion featured a deep dive into enabling search and read tools for AI support personas – with contributions from Angela_MRS, Falco, and pfaffman (Angela_MRS post 1, Falco post 2, Angela_MRS post 3, Falco post 4, Angela_MRS post 5, Falco post 6, Angela_MRS post 7, Angela_MRS post 8, pfaffman post 9, plus an extra insight about private categories (Allow Bot on Private Categories)) – ensuring that even “backstage” AI teaching ideas were on the table. Other hot topics included improvements to AI‐based moderation (sam post 11) and extensive discussions on the customization and troubleshooting of the Discourse AI Bot (jorge-gbs post 152 through EricGT post 158). Meanwhile, threads on automation—such as the ability to send AI summaries to groups (sam post 2, jordan-violet post 3)—and resource management via daily AI token limits (sam post 13) highlighted both the potential and the challenges of integrating AI in community workflows. Finally, troubleshooting posts like those on the Invalid Request Error in Google Flash (BrianC post 1 through sam post 7) and API enhancements in the combined discussion on enriching API calls and Google Search integration (sam post 3 in 357898, jorge-gbs post 11 in 307107, sam post 12 in 307107) underscore the collaborative drive to refine Discourse’s AI capabilities.
Interesting Topics
AI Avatar Generator (ai, Feature): tpetrov initiated a discussion to replace static letter avatars with a dynamic, AI‐generated version. The idea was expanded by sam with technical insights (tpetrov post 1, sam post 2, tpetrov post 3).
Experiments with AI-Based Moderation on Discourse Meta (moderation, ai, Community):
A critical update by sam detailed refinements in context handling to prevent irrelevant images, helping to improve the content moderation workflow (sam post 11).
Moderation Tool for Formatting Code with AI (ai, Feature):
In an inventive proposal, sam suggested using a triage system coupled with custom tools to automatically reformat code, reducing the risk of post-destruction (sam post 11 in 347980).
Limit the Number of AI Tokens a User Can Use in a Day? (ai, completed, Feature):
Addressing resource management, sam confirmed that users now receive a notification once their daily AI token limit is hit (sam post 13 in 330500).
AI + Automation Governance: Orchestrating Independent AI Triage Scripts (ai, Support): sam outlined a workable workflow for integrating custom triage tools into the AI moderation process, signaling promising automation updates (sam post 13 in 350716).
Below is a detailed overview of this week’s vibrant discussions and developments around Discourse’s AI features and integrations from 2025-04-07 to 2025-04-14.
Experiments with AI based moderation on Discourse Meta (Community, moderation, ai)
• An experimental approach was shared by sam (post 14) to help moderators guide AI behavior.
Discourse AI - Large Language Model (LLM) settings page (#site-management, how-to, ai)
• Configuration queries were raised by jrgong (post 14), with corrective advice from Falco (post 15).
Discourse AI - Semantic Search & Summarization Not Working for our configuration (Support, ai, ai-summarize)
• Steve_John reported issues (post 1) that were probed further by sam (post 3) and elaborated with troubleshooting suggestions (post 4, post 5).
Activity
wotography led the conversation on privacy, kicking off post 1 in the Concerns over personal privacy with the AI plugin thread, and later contributed further in post 7 and post 10.
Each contribution – from detailed technical analysis to thoughtful user inquiries – has been crucial in shaping the future roadmap of Discourse AI integrations. Community members have built on each other’s ideas by referring to earlier posts like this privacy note or the troubleshooting tip, ensuring a robust and collaborative dialogue.
Thanks for reading, and I’ll see you again next week!
How are you using AI/LLMs to create themes/components/plugins?
In the Dev category with the ai tag, jimkleiber launched a discussion on leveraging AI for Discourse customizations (post 1). Falco soon shared his experience using Cursor in agent mode (post 2), while Dimava noted challenges with bug fixes (post 3). awesomerobot later added practical tips for dealing with modern Ember issues (post 4).
Listing conversations with artificial intelligence on a separate page or filtering them on the messages page
In this Feature discussion tagged ai, kuaza proposed a dedicated page for AI conversations (post 1). The idea received active feedback from Jagster (post 2), with follow-up clarifications and questions further detailed in posts 3, 4, 5, and 6.
Discourse AI - Spam detection
Within #SiteManagement and tagged with moderation, how-to, ai, and spam, users discussed replacing Akismet with an AI-based solution. Olle11 raised questions regarding cost-effective alternatives (post 10), while KhoiUSA recommended Gemini 2.0 Flash with follow-ups in posts 11, 12, and 13.
Building Modular AI Chatbots
In the Support category with tags ai and ai-bot, a conversation unfolded about designing a modular AI chatbot system. Yenwod introduced the concept (post 1), followed by Falco affirming technical feasibility (post 3) and further elaborations from Yenwod (post 4, post 7). Additional insights were provided by Falco (post 5) and sam (post 6).
Self-Hosting Embeddings for DiscourseAI
Under the #Self-Hosting category with tags ai, ai-search, and related-topics, discussions focused on endpoint configuration and backfill procedures. satonotdead reported having an endpoint to use for embeddings (post 21), and Falco clarified that Discourse now handles backfill automatically (post 22).
Need Support for Displaying Reasoning Process and Enabling Grok to Recognize Images
In a Support thread tagged ai, hel_Sinki raised the need for Grok’s visible reasoning traces and improvements so that its image recognition capabilities can properly interact with posts (post 1).
PDF support in Discourse AI
In the #SiteManagement category with tags how-to and ai, Michael_Liu reported issues with PDF uploading and indexing, highlighting error messages and questioning file size limits (post 14 and post 15).
AI Helper stuck generating
A bug report in the Bug category tagged with ai and ai-helper detailed an instance where the AI helper remained stuck on “generating”. MachineScholar initiated the report (post 1), followed by responses from Falco (post 2), further updates from MachineScholar (posts 3 and 4), and a clarification by keegan (post 5).
Prompt tools: funnel, orbit, and flux charts
In a fresh Feature discussion with the tags ai and sql-query, EricGT introduced innovative concepts for prompt evaluation. He outlined how tools like Funnel, Orbit, and Flux could provide deeper insights into prompt performance (post 1).
\u003ca class="mention" href="/u/wlandgraf"\u003ewlandgraf\u003c/a\u003e impulsó un botón de Redactar respuesta para permitir a los moderadores invocar respuestas generadas por IA a demanda, lo que generó sugerencias para vincular la acción a través de personas y la barra de herramientas de IA Usar IA para ayudar a responder nuevas publicaciones en Discourse(Funcionalidad, ai).
\u003ca class="mention" href="/u/wlandgraf"\u003ewlandgraf\u003c/a\u003e preguntó si la carga de muchos archivos de persona aumenta los costos de LLM, y \u003ca class="mention" href="/u/Falco"\u003eFalco\u003c/a\u003e explicó la fórmula “Fragmentos de carga de tokens” × “Fragmentos de conversación de búsqueda” para ajustar los gastos ¿Más archivos de persona aumentan los gastos de las solicitudes de LLM?(Soporte, ai).
\u003ca class="mention" href="/u/Yenwod"\u003eYenwod\u003c/a\u003e y \u003ca class="mention" href="/u/kuaza"\u003ekuaza\u003c/a\u003e iteraron en los prompts para una familia de VaccineBot, mientras que \u003ca class="mention" href="/u/sam"\u003esam\u003c/a\u003e propuso una herramienta JS personalizada para cargar solo el subconjunto relevante de las cargas Creación de chatbots de IA modulares(Soporte, ai-bot).
\u003ca class="mention" href="/u/Moin"\u003eMoin\u003c/a\u003e detectó que el icono de IA oscurecía el menú + en el compositor, y \u003ca class="mention" href="/u/awesomerobot"\u003eawesomerobot\u003c/a\u003e confirmó que la misma corrección se aplica que en un tema UX anterior El icono de IA para el asistente de categoría está posicionado sobre el menú +(UX, ai-helper).
\u003ca class="mention" href="/u/sam"\u003esam\u003c/a\u003e verificó que el campo Nombre de la herramienta ahora aplica validación, cerrando un error de larga data Validar el campo Nombre para herramientas de IA(Error, ai).
\u003ca class="mention" href="/u/wlandgraf"\u003ewlandgraf\u003c/a\u003e solicitó un panel para inspeccionar los scripts y parámetros de todas las herramientas de IA, y \u003ca class="mention" href="/u/Falco"\u003eFalco\u003c/a\u003e aclaró que las herramientas integradas (Ruby) difieren de las herramientas JS personalizadas ¿Podemos ver todas las herramientas y sus configuraciones en IA?(Soporte, ai).
\u003ca class="mention" href="/u/keegan"\u003ekeegan\u003c/a\u003e resolvió que el idioma del asistente de IA del compositor sigue la configuración regional del Sitio, y después de la depuración de PM con \u003ca class="mention" href="/u/MachineScholar"\u003eMachineScholar\u003c/a\u003e, una actualización del núcleo + plugin corrigió el bloqueo de generación de menú El Asistente de IA se queda atascado generando(Error, ai-helper).
\u003ca class="mention" href="/u/Justin_Gonzalez"\u003eJustin_Gonzalez\u003c/a\u003e preguntó sobre la indexación de Discourse a través de Glean, y \u003ca class="mention" href="/u/Falco"\u003eFalco\u003c/a\u003e sugirió usar un user-agent de rastreador o Discourse AI para mostrar contenido oculto Indexación de contenido de la comunidad de Discourse en Glean AI(Soporte, ai).
\u003ca class="mention" href="/u/Jagster"\u003eJagster\u003c/a\u003e informó de alucinaciones generalizadas de la función de explicación, lo que provocó consejos sobre la elección del modelo y ajustes de prompts El asistente de IA alucina mucho al explicar algo(Soporte, ai-helper, ai-custom-prompt).
\u003ca class="mention" href="/u/Falco"\u003eFalco\u003c/a\u003e (9 me gusta, 4 publicaciones)
• Explicó la fórmula de costos en archivos de persona 363440/3?silent=true y 363440/5?silent=true
• Aclaró las diferencias de visibilidad de herramientas 363437/3?silent=true
• Asesoró sobre el enfoque de indexación de Glean 363335/4?silent=true
\u003ca class="mention" href="/u/kuaza"\u003ekuaza\u003c/a\u003e (4 me gusta, 2 publicaciones)
• Celebró la implementación del filtrado de conversaciones de IA 362711/8?silent=true
• Compartió ideas de prompts para VaccineBot 361807/9?silent=true
\u003ca class="mention" href="/u/isaac"\u003eisaac\u003c/a\u003e (4 me gusta, 1 publicación)
• Anunció la fusión de PR para la lista de conversaciones 362711/7?silent=true
\u003ca class="mention" href="/u/keegan"\u003ekeegan\u003c/a\u003e (3 me gusta, 2 publicaciones)
• Investigó el bloqueo del asistente, habilitó la depuración de PM 362643/7?silent=true y 362643/8?silent=true
\u003ca class="mention" href="/u/awesomerobot"\u003eawesomerobot\u003c/a\u003e (3 me gusta, 2 publicaciones)
• Confirmó que aún no hay “Redactar respuesta” integrado 363298/3?silent=true
• Aplicó la corrección UX de un parche de modal anterior 363692/4?silent=true
\u003ca class="mention" href="/u/maiki"\u003emaiki\u003c/a\u003e (3 me gusta, 1 publicación)
• Validó el campo de nombre de herramienta en la UI de administración 330830/3?silent=true
\u003ca class="mention" href="/u/Moin"\u003eMoin\u003c/a\u003e (3 me gusta, 1 publicación)
• Informó del problema de recorte del icono de IA en el compositor 363692/1?silent=true
\u003ca class="mention" href="/u/Yenwod"\u003eYenwod\u003c/a\u003e (2 me gusta, 2 publicaciones)
• Experimentó con las herramientas de VaccineBot 361807/8?silent=true y 361807/11?silent=true
“This theme component adds a button to each post, allowing users to send the post to an AI bot for analysis via a direct chat message.” — Don, AI Post Analyzer for Chat
Last week (2025-04-28 to 2025-05-05) on #meta.discourse.org was packed with #ai improvements and in-depth discussions:
Don introduced AI Post Analyzer for Chat (theme-component, chat, ai, ai-bot), which adds a per-post chat analysis button; merefield asked about support for the Chatbot plugin, and Lilly confirmed success after permission tweaks.
Over the past week, the ai and ai-bot community dove into a variety of Support, Bug, UX, and Feature discussions. From customizing the AI bot homepage to fixing a persistent topic‐summary hang, members collaborated on configuration tricks, merged hotfixes for PDF uploads and image captioning, and ironed out UX quirks like proofread quote handling and AI typing indicators. Under the hood, developers wrestled with LLM performance—troubleshooting a Perplexity Sonar Deep Research setup—and explored advanced automations and web‐search integrations, laying groundwork for richer AI experiences in Discourse.
Topic Summary Hanging (Bug, ai) KhoiUSA reported that “Summarize Topic” hung with a 400 Invalid JSON Payload error in post #1. Falco recommended switching the provider to OpenAI and updating the endpoint, resolving the issue in post #5, and sam suggested adding fallback logic for unexpected non‐JSON responses in post #7.
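A minimal sketch of the suggested fallback, treating a non-JSON completion as plain text instead of failing the summary job (illustrative only; not the plugin's actual implementation):

```python
import json

# Illustrative sketch of the fallback idea from the discussion above: if the
# provider returns something that is not valid JSON, keep the raw text rather
# than raising and aborting the whole summary.
def parse_summary(raw_response: str) -> str:
    try:
        payload = json.loads(raw_response)
    except json.JSONDecodeError:
        # Provider replied with plain text (or an error page); fall back to it.
        return raw_response.strip()
    # Assumed shape for the happy path: {"summary": "..."}.
    return payload.get("summary", raw_response).strip()

print(parse_summary('{"summary": "The topic discusses AI quotas."}'))
print(parse_summary("The provider replied with plain text."))
```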
Proofread Breaks Quotes (UX, ai-helper) Jagster flagged that the Proofread helper mangles quotes in the preview in post #1. Falco clarified it’s a visualization quirk and proposed a diff‐only approach in post #2, and stance455 confirmed quoted text is skipped after the latest update in post #3.
PDF Upload in AI Bot UX (Support, ai-bot) MachineScholar asked if the new AI Bot UX supports text‐based PDFs in post #1, referencing the enhanced PDF processing roadmap. Falco merged a bugfix in post #4 enabling RAG on image‐only PDFs, though full text‐PDF support remains under development.
Perplexity Sonar Deep Research Configuration (Dev, ai) aas encountered 502 errors testing perplexity/sonar-deep-research via OpenRouter in post #1, while sam noted API slowness and high per‐reply costs in post #2.
Disable AI Title Generator in PMs (UX, ai-helper) awesomerobot merged a fix for the AI title generator not being editable in private messages in post #2.
Auto Responder Using AI (Feature, automation) Geraldine_Comiskey sketched out an AI‐powered auto‐responder for unwanted DMs in post #17, sparking ideas for conversational “chatbot” flows.
AI Typing Indicator & Visual Cue (Feature, ai) sam proposed displaying an “AI is typing” placeholder on topic pages to signal live AI responses, extending existing chat presence indicators in post #4.
AI Assistants with Web Search (Support, ai-bot) sam detailed how the OpenAI Responses API supports integrated search and demoed built‐in Google search tooling from the discourse-ai repo, along with a shared conversation demo.
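For reference, a minimal sketch of integrated search via the OpenAI Responses API; the model and tool names follow OpenAI's public documentation at the time of writing and are not Discourse-specific settings:

```python
from openai import OpenAI

# Minimal sketch of the integrated-search idea: the Responses API lets the
# model call a built-in web search tool on its own before answering.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4.1",
    tools=[{"type": "web_search_preview"}],   # built-in web search tool
    input="What changed in the latest Discourse AI spam detection release?",
)
print(response.output_text)
```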
Activity
Below is a summary of contributions by all active users over the past week: