The Discourse AI plugin has many features that require embeddings to work, such as Related Topics, AI Search, AI Helper Category and Tag suggestions, etc. While you can use a third-party API, such as Configure API Keys for OpenAI, Configure API Keys for Cloudflare Workers AI, or Configure API Keys for Google Gemini, we built Discourse AI from day one so you are not locked into those.
Running with HuggingFace TEI
Hugging Face provides Text Embeddings Inference (TEI), a container image that can get you running quickly.
For example:
mkdir -p /opt/tei-cache
docker run --rm --gpus all --shm-size 1g -p 8081:80 \
-v /opt/tei-cache:/data \
ghcr.io/huggingface/text-embeddings-inference:latest \
--model-id BAAI/bge-large-en-v1.5
This should get you up and running with a local instance of BAAI/bge-large-en-v1.5, a very good performing open-source model.
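If the machine has no GPU, TEI also publishes CPU-only images. A hedged variant of the same command, assuming the cpu-latest tag matches your TEI release (expect much slower inference):

mkdir -p /opt/tei-cache
docker run --rm --shm-size 1g -p 8081:80 \
-v /opt/tei-cache:/data \
ghcr.io/huggingface/text-embeddings-inference:cpu-latest \
--model-id BAAI/bge-large-en-v1.5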
You can check that it’s working with:
curl http://localhost:8081/ \
-X POST \
-H 'Content-Type: application/json' \
-d '{ "inputs": "Testing string for embeddings" }'
This should return an array of floats under normal operation.
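For a quick sanity check on the result, you can count the dimensions of the returned vector. This is a hedged sketch that assumes jq is installed and that the response is a nested array with one vector per input; BAAI/bge-large-en-v1.5 should produce 1024-dimension embeddings.

# Count the dimensions of the first embedding in the response
curl -s http://localhost:8081/ \
-X POST \
-H 'Content-Type: application/json' \
-d '{ "inputs": "Testing string for embeddings" }' | jq '.[0] | length'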
Making it available for your Discourse instance
Most of the time, you will be running this on a dedicated server because of the GPU speed-up. When doing so, I recommend running a reverse proxy, doing TLS termination, and securing the endpoint so it can only be accessed by your Discourse instance.
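As a hedged sketch of one way to do this, assuming Caddy is installed on the TEI host, tei.example.com points at it, and 203.0.113.10 is your Discourse server's IP (any reverse proxy and firewall will do):

# Terminate TLS and forward traffic to the local TEI container
caddy reverse-proxy --from tei.example.com --to localhost:8081

# Only allow the Discourse server to reach the HTTPS endpoint
# (ufw evaluates rules in order, so the allow must be added first)
ufw allow from 203.0.113.10 to any port 443 proto tcp
ufw deny 443/tcp

You may also want to publish the container port only on localhost (for example -p 127.0.0.1:8081:80 in the docker run command above) so the unencrypted port is not reachable from outside the host.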
Configuring DiscourseAI
Discourse AI includes site settings to configure the inference server for open-source models. Point the ai_hugging_face_tei_endpoint setting at your server.
After that, set the ai_embeddings_model setting to the model you are running.
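You can change both settings in the admin interface, or from a Rails console on the Discourse server. A hedged sketch, assuming a standard /var/discourse install and that bge-large-en is the setting value matching the model served above:

cd /var/discourse
./launcher enter app
rails c

# Inside the Rails console:
SiteSetting.ai_hugging_face_tei_endpoint = "https://tei.example.com"
SiteSetting.ai_embeddings_model = "bge-large-en"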