Self-Hosting Sentiment and Emotion for DiscourseAI

The Discourse AI plugin has support for requesting emotion/sentiment classification of new posts, which is stored in the database and can be used in reports and admin dashboards.

Running with HuggingFace TEI

HuggingFace provides an awesome container image that can get you running quickly.

For example:

mkdir -p /opt/tei-cache
docker run --rm --gpus all --shm-size 1g -p 8081:80 \
  -v /opt/tei-cache:/data \
  ghcr.io/huggingface/text-embeddings-inference:latest \
  --model-id cardiffnlp/twitter-roberta-base-sentiment-latest \
  --revision refs/pr/30

This should get you up and running with a local instance of ‘cardiffnlp/twitter-roberta-base-sentiment-latest’, an open model that can classify posts into positive/negative/neutral.

You can check if it’s working with:

curl http://localhost:8081/ \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{ "inputs": "I am happy" }'

Under normal operation, this should return an array of confidence scores, one for each label.
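As a sketch of how you might consume that response, the snippet below picks the top-scoring label using only the Python standard library. The JSON here is a made-up example of the kind of array TEI returns; real scores will differ:

```shell
# Hypothetical response captured from the curl call above; TEI returns
# an array of {label, score} objects for classification models.
RESPONSE='[{"label":"positive","score":0.98},{"label":"neutral","score":0.015},{"label":"negative","score":0.005}]'

# Pick the label with the highest confidence score.
TOP_LABEL=$(python3 -c '
import json, sys
preds = json.loads(sys.argv[1])
print(max(preds, key=lambda p: p["score"])["label"])
' "$RESPONSE")

echo "$TOP_LABEL"
# prints: positive
```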

Supported models

Making it available for your Discourse instance

Most of the time, you will be running this on a dedicated server because of the GPU speed-up. When doing so, I recommend running a reverse proxy, doing TLS termination, and securing the endpoint so that only your Discourse instance can connect to it.
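As a sketch of that setup, an nginx server block might look like the following. The hostname, certificate paths, and IP address are placeholders, not part of the original instructions:

```nginx
# Hypothetical reverse proxy in front of the TEI container on port 8081.
# Replace tei.example.com, the certificate paths, and the allowed IP
# with your own values.
server {
    listen 443 ssl;
    server_name tei.example.com;

    ssl_certificate     /etc/letsencrypt/live/tei.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/tei.example.com/privkey.pem;

    # Only allow your Discourse server to connect.
    allow 203.0.113.10;  # <- your Discourse instance's IP
    deny  all;

    location / {
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
    }
}
```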

Configuring Discourse AI

Discourse AI includes site settings to configure the inference server for open-source models. You should point it to your server using the ai_sentiment_model_configs setting.
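Based on the fields shown later in this topic, an ai_sentiment_model_configs entry looks roughly like this (the endpoint is a placeholder for your own server; api_key can be left blank if the endpoint is secured by IP allowlisting instead):

```yaml
# Hypothetical example; substitute your own endpoint.
model_name: cardiffnlp/twitter-roberta-base-sentiment-latest
endpoint: https://tei.example.com
api_key: ""
```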

After that, enable the classification by toggling ai_sentiment_enabled.


Are there plans to support models for languages other than English?


@Falco if one decides to run this on the same server running discourse (e.g. we have a very small deployment with a few thousand posts), could you update the instructions to outline

  1. How can discourse integrate with a local instance of HuggingFace TEI container image
  2. Suggestions on how much additional RAM/disk is required to run the above (e.g. if the base Discourse is running on 2GB RAM with 20GB disk)

So I have set up a new self-hosted Discourse instance and am trying to set up sentiment classification. This is my ai_sentiment_model_configs:

| Key | Value |
|---|---|
| model name | cardiffnlp/twitter-roberta-base-sentiment-latest |
| endpoint | https://my_own_instance |
| api_key | (blank) |

And it works, sort of, I get the sentiment bar graph.

However, the Emotion table is empty. This doc looks incomplete, or too vaguely worded for me to grasp what needs to be done.

Do I run another Docker container with a different model ID (roberta-base-go_emotions?), or something else? What do I need to do to get that Emotion table filled up?

Would prefer to self host these services if possible. TIA if anybody can point me in the right direction.

For emotions you need to run the SamLowe/roberta-base-go_emotions model too.


Thank you. So I just ran a second Docker container with some tweaks, like so:

mkdir -p /opt/tei-cache2
docker run --detach --rm --gpus all --shm-size 1g -p 8082:80 \
  -v /opt/tei-cache2:/data \
  ghcr.io/huggingface/text-embeddings-inference:latest \
  --model-id SamLowe/roberta-base-go_emotions

added a new entry in ai_sentiment_model_configs, and it’s all working now. Thank you. :slight_smile: