This topic covers the configuration of the Sentiment feature of the Discourse AI plugin.
Required user level: Moderator
Sentiment keeps tabs on your community by analyzing posts and assigning sentiment and emotion scores, giving you an overall sense of the mood over any period of time. These insights can help you understand the types of users posting in your community and how they interact with one another.
Features
- Overall sentiment: compares the number of posts classified as either positive or negative
- Bar graph showing toggleable numerical values for positive, negative, and overall scores
- Emotion: number of topics and posts classified by multiple emotions, grouped by time frame:
  - Today
  - Yesterday
  - Last 7 days
  - Last 30 days
- Reports for any period of time that can be accessed via settings (applicable only to admin users):
  - Yearly
  - Quarterly
  - Monthly
  - Weekly
  - Custom range
Enabling Sentiment
Configuration
Sentiment is enabled by default for hosted customers. For manual setup, follow the steps below:
1. Go to Admin settings → Plugins, search for or find `discourse-ai`, and make sure it's enabled
2. Enable the `ai_sentiment_enabled` setting to turn on sentiment analysis
3. Head over to `/admin/dashboard/sentiment` to see the respective reports
Once enabled, Sentiment will classify all new posts going forward, as well as posts from the last 60 days. To classify all of your site’s historical posts, a backfill task must be run from the console (see the example below).
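For reference, on a standard Docker-based install this is a rake task run from inside the app container; the commands below assume the default container name (`app`) and install location:

```sh
cd /var/discourse
./launcher enter app
rake ai:sentiment:backfill
```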
How is topic/post data processed? How are scores assigned?
Sentiment has a per-post fidelity: each post is classified individually, and that data can then be cut in many shapes (per tag, category, time period, etc.). The overall sentiment report compares the number of posts classified as either positive or negative; a post is counted as positive or negative when its respective score exceeds the configured threshold.
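As an illustration only (this is not the plugin's actual code, and the field names, scores, and threshold value are made up for the example), the counting boils down to something like this Ruby sketch:

```ruby
# Hypothetical per-post scores; in practice these come from the classification model.
posts = [
  { id: 1, positive: 0.91, negative: 0.04 },
  { id: 2, positive: 0.10, negative: 0.85 },
  { id: 3, positive: 0.40, negative: 0.35 }, # neither score clears the threshold
]

threshold = 0.6 # stand-in for the configured threshold score

positive = posts.count { |p| p[:positive] > threshold }
negative = posts.count { |p| p[:negative] > threshold }

puts "positive: #{positive}, negative: #{negative}" # => positive: 1, negative: 1
```

Posts whose scores never clear the threshold simply don't contribute to either count.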
Are there any plans to add support for other languages?
Yes! In the future we plan to support other languages, both by adding simple multilingual Machine Learning (ML) models and by using multilingual Large Language Models (LLMs) to classify the data instead of dedicated models.
The OP has been updated with a new video showcasing the updated features of Sentiment, including a ton more emotions and the ability to see which topics/posts are associated with each emotion.
I configured the sentiment model config with a model_name, endpoint, and api_key copied from the LLM settings, where it passes the test, but I get the error below in /logs.
(But maybe I don’t understand correctly, because why doesn’t sentiment use one of the configured LLMs?)
Using claude-3-5-sonnet.
{"type":"error","error":{"type":"invalid_request_error","message":"anthropic-version: header is required"}} (Net::HTTPBadResponse)
/var/www/discourse/plugins/discourse-ai/lib/inference/hugging_face_text_embeddings.rb:71:in `classify'
/var/www/discourse/plugins/discourse-ai/lib/sentiment/post_classification.rb:142:in `request_with'
/var/www/discourse/plugins/discourse-ai/lib/sentiment/post_classification.rb:78:in `block (4 levels) in bulk_classify!'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:1593:in `evaluate_to'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/promises.rb:1776:in `block in on_resolvable'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:359:in `run_task'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:350:in `block (3 levels) in create_worker'
<internal:kernel>:187:in `loop'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:341:in `block (2 levels) in create_worker'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:340:in `catch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:340:in `block in create_worker'
Status: 400
{"type":"error","error":{"type":"invalid_request_error","message":"anthropic-version: header is required"}} (Net::HTTPBadResponse)
/var/www/discourse/plugins/discourse-ai/lib/inference/hugging_face_text_embeddings.rb:71:in `classify'
/var/www/discourse/plugins/discourse-ai/lib/sentiment/post_classification.rb:142:in `request_wit...
The sentiment module doesn’t use general-purpose LLMs, but models specifically fine-tuned for sentiment classification. If you want to run those models on your own, that is documented at Self-Hosting Sentiment and Emotion for DiscourseAI.
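For anyone setting this up, each entry in `ai_sentiment_model_configs` takes the three fields mentioned elsewhere in this thread (model_name, endpoint, api_key). A hypothetical entry pointing at a self-hosted classification server could look like the sketch below; the model name, URL, and key are placeholders, so follow the self-hosting guide for the exact values your deployment needs:

```yaml
# Hypothetical ai_sentiment_model_configs entry (placeholder values)
model_name: "cardiffnlp/twitter-roberta-base-sentiment-latest"  # example classification model
endpoint: "https://sentiment.example.com"                       # your self-hosted inference server
api_key: "your-inference-server-key"
```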
@Falco I noticed that Sentiment stopped working since January 2025. My guess is that there is a new ai_sentiment_model setting, as explained in the link above, intended for running your own dedicated sentiment model/image. I noticed that after the Discourse software update, ai_sentiment_model_configs is now empty (should it be empty?).
When I try to run the rake ai:sentiment:backfill command, it returns an error:
rake aborted!
ActiveRecord::StatementInvalid: PG::SyntaxError: syntax error at or near ")" (ActiveRecord::StatementInvalid)
LINE 1: ...e_upload_id", "posts"."outbound_message_id" FROM () AS posts..
The plugin defaulted to a server on DigitalOcean which I put together to make testing easier.
I have since changed the plugin defaults to a clean slate, and people who want AI classification need to run servers following the documentation here on Meta.
Indeed, but we were paying that cost for testing purposes. It’s not sustainable to offer that for every self hoster.
Worth mentioning that we do offer this classification service on GPU-accelerated servers as part of our hosting service.
<internal:kernel>:187:in `loop'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:341:in `block (2 levels) in create_worker'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:340:in `catch'
/var/www/discourse/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.5/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb:340:in `block in create_worker'
no implicit conversion of Symbol into Integer (TypeError)
/var/www/discourse/plugins/discourse-ai/lib/sentiment/post_classification.rb:163:in `block in transform_result'
/var/www/discourse/plugins/discourse-ai/lib/sentiment/post_classification.rb:163:in `each'
/var/www/discourse/plugin…
---
### **Problem 2: Microsoft Azure model configuration leads to a Hugging Face error**
When I tried switching to the Microsoft Text Analytics model in the Discourse AI settings, I ran into a `404 Resource not found` error, and surprisingly the backtrace still points at `hugging_face_text_embeddings.rb`.
**Error message (from a job exception):**
Job exception: 416 errors
{"error":{"code":"404","message": "Resource not found"}} (Net::HTTPBadResponse)
**Relevant portion of the backtrace (pointing at Hugging Face even though the Microsoft model is selected):**
**Observation:** This suggests that even when I select and configure the Microsoft model's endpoint and API key, the Discourse AI plugin appears to hard-code or misroute sentiment analysis requests through Hugging Face-specific logic or endpoints. This prevents using the Microsoft model at all.
---
### **Configuration screenshots:**
I've attached a screenshot of my Discourse AI settings to show the configuration:
* Detailed configuration for the AI sentiment models (showing both the Hugging Face and the Microsoft models) - I tested with Hugging Face-only and Microsoft-only model configurations, with the same result

These issues make the sentiment analysis feature effectively unusable. It seems the plugin needs an update to handle Hugging Face's new response format and to route requests correctly when different sentiment providers are configured.
Any help or guidance on these issues would be greatly appreciated.
Thanks!
I'm curious whether sentiment reporting is working for others, or whether I've configured something incorrectly. I'd like to know what else I should check or configure to enable sentiment reporting, since I'm still experiencing the same issue.
We are trying to use this feature with Azure AI Language (from our self-hosted Discourse instance) - as we are already using our Azure subscription to integrate GPT-4.5 with Discourse (for summarization and chat-bot functionality):
…but we are getting no data in the sentiment dashboard, and can see these errors in the logs:
Discourse AI: Errors during bulk classification: Failed to classify 208 posts (example ids: 2256, 909, 2270, 2260, 2797) : JSON::ParserError : An empty string is not a valid JSON string.
The backtrace shows that Discourse might be trying to use HuggingFace - are these the only models supported at the moment?
So, is the only way to use this feature to set up your own instances of the models (which require heavy GPU instances that would be expensive)? This feature seems very useful, but it looks like it would cost me more to set up than my current Discourse hosting does.
Yes, the supported models are the ones listed in the OP.
We will eventually add support for classifying using LLMs for people whose cost isn’t an issue.
Well, the whole feature is built around classifying posts using ML models, so yes, you need somewhere to run those.
And since Discourse can run on the very cheapest VPS out there, running ML models is indeed comparatively expensive. If you want the feature in the cheapest way possible, it is doable to run the models on a server with just a handful of CPU cores, as long as you have enough RAM to load them.
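As a rough sketch of one possible CPU-only setup (this is an assumption, not the official recipe; the image tag and model id below are examples, and the supported configuration is documented in the Self-Hosting Sentiment and Emotion for DiscourseAI topic), a classification model can be served with Hugging Face's text-embeddings-inference container and then referenced from `ai_sentiment_model_configs`:

```sh
# Hypothetical example: serve a sentiment classification model on CPU.
# Image tag, port mapping, and model id are assumptions; adjust to match the self-hosting guide.
docker run -d --name sentiment-classifier \
  -p 8080:80 \
  -v /opt/sentiment-models:/data \
  ghcr.io/huggingface/text-embeddings-inference:cpu-latest \
  --model-id cardiffnlp/twitter-roberta-base-sentiment-latest
```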