The Discourse AI plugin can request emotion/sentiment classification of new posts; the results are stored in the database and can be used in reports and admin dashboards.
Discourse AI supports two types of classification, each requiring its own model:
- Sentiment — classifies posts as positive, negative, or neutral (using cardiffnlp/twitter-roberta-base-sentiment-latest)
- Emotion — classifies posts across 28 emotion labels like joy, anger, surprise, etc. (using SamLowe/roberta-base-go_emotions)
To get both sentiment and emotion data in your dashboards, you need to run both models.
## Running with HuggingFace TEI
HuggingFace provides an awesome container image that can get you running quickly.
### Sentiment model
```shell
mkdir -p /opt/tei-sentiment-cache
docker run --rm --gpus all --shm-size 1g -p 8081:80 \
  -v /opt/tei-sentiment-cache:/data \
  ghcr.io/huggingface/text-embeddings-inference:latest \
  --model-id cardiffnlp/twitter-roberta-base-sentiment-latest
```
This should get you up and running with a local instance of cardiffnlp/twitter-roberta-base-sentiment-latest, an open model that can classify posts into positive/negative/neutral.
You can check that it's working with:
```shell
curl http://localhost:8081/ \
  -X POST \
  -H 'Content-Type: application/json' \
  -d "{ \"inputs\": \"I am happy\" }"
```
This should return an array of confidence scores for each label under normal operation.
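The response should look something like the following (the scores here are illustrative, and the exact shape may vary between TEI versions):

```json
[
  { "label": "positive", "score": 0.98 },
  { "label": "neutral", "score": 0.015 },
  { "label": "negative", "score": 0.005 }
]
```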
### Emotion model
To also get emotion classification, run a second container with the emotion model:
```shell
mkdir -p /opt/tei-emotion-cache
docker run --rm --gpus all --shm-size 1g -p 8082:80 \
  -v /opt/tei-emotion-cache:/data \
  ghcr.io/huggingface/text-embeddings-inference:latest \
  --model-id SamLowe/roberta-base-go_emotions
```
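As with the sentiment container, you can verify the emotion endpoint with a quick request (this assumes the container above is running and listening on port 8082):

```shell
curl http://localhost:8082/ \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{ "inputs": "I am happy" }'
```

A working instance returns an array of confidence scores, one per emotion label.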
## Supported models
- cardiffnlp/twitter-roberta-base-sentiment-latest · Hugging Face — sentiment (positive/negative/neutral)
- SamLowe/roberta-base-go_emotions · Hugging Face — emotion (28 emotion labels)
## Making it available for your Discourse instance
Most of the time, you will be running this on a dedicated server because of the GPU speed-up. When doing so, I recommend running a reverse proxy, doing TLS termination, and securing the endpoint so that only your Discourse instance can connect to it.
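As a sketch, a minimal nginx server block for this could look like the following. The hostname, certificate paths, and allowed IP address are all placeholders you would substitute for your own setup:

```nginx
server {
    listen 443 ssl;
    server_name tei.example.com;          # placeholder hostname

    # placeholder certificate paths
    ssl_certificate     /etc/ssl/certs/tei.example.com.pem;
    ssl_certificate_key /etc/ssl/private/tei.example.com.key;

    # Only allow connections from your Discourse instance (example address)
    allow 203.0.113.10;
    deny  all;

    location / {
        # The sentiment TEI container from above; use 8082 for the emotion one
        proxy_pass http://127.0.0.1:8081;
    }
}
```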
## Configuring Discourse AI
Discourse AI includes site settings to configure the inference server for open-source models. You should point it to your server using the ai_sentiment_model_configs setting.
This setting accepts a JSON array of model configurations. Each entry requires:
| Field | Description |
|---|---|
| model_name | The HuggingFace model ID (e.g. cardiffnlp/twitter-roberta-base-sentiment-latest) |
| endpoint | The URL of your TEI instance (e.g. https://your-server:8081) |
| api_key | API key for the endpoint (can be left blank if not required) |
To get both sentiment and emotion dashboards, add an entry for each model you are running. For example, if you’re running both models locally:
- Entry 1: model_name cardiffnlp/twitter-roberta-base-sentiment-latest, endpoint https://your-server:8081
- Entry 2: model_name SamLowe/roberta-base-go_emotions, endpoint https://your-server:8082
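Assuming the two containers from earlier in this guide, the resulting ai_sentiment_model_configs value would look something like this (field names taken from the table above; the exact JSON shape shown is an assumption):

```json
[
  {
    "model_name": "cardiffnlp/twitter-roberta-base-sentiment-latest",
    "endpoint": "https://your-server:8081",
    "api_key": ""
  },
  {
    "model_name": "SamLowe/roberta-base-go_emotions",
    "endpoint": "https://your-server:8082",
    "api_key": ""
  }
]
```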
After that, enable classification by toggling the ai_sentiment_enabled site setting.
