Self-Hosting Embeddings for DiscourseAI

The Discourse AI plugin has many features that require embeddings to work, such as Related Topics, AI Search, AI Helper Category and Tag suggestion, etc. While you can use a third-party API, like Configure API Keys for OpenAI, Configure API Keys for Cloudflare Workers AI or Configure API Keys for Google Gemini, we built Discourse AI from day one so you are not locked into those.

Running with HuggingFace TEI

HuggingFace provides an awesome container image that can get you running quickly.

For example:

mkdir -p /opt/tei-cache
docker run --rm --gpus all --shm-size 1g -p 8081:80 \
  -v /opt/tei-cache:/data \
  ghcr.io/huggingface/text-embeddings-inference:latest \
  --model-id BAAI/bge-large-en-v1.5

This should get you up and running with a local instance of BAAI/bge-large-en-v1.5, an open-source model that performs very well.

You can check if it’s working with

curl http://localhost:8081/ \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{ "inputs": "Testing string for embeddings" }'

Which should return an array of floats under normal operation.
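For reference, a healthy response is a JSON array containing one embedding (itself an array of floats) per input string, something like this (values shortened here for illustration):

[[0.0188, -0.0304, 0.0611, ..., 0.0142]]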

Making it available for your Discourse instance

Most of the time, you will be running this on a dedicated server because of the GPU speed-up. When doing so, I recommend running a reverse proxy, doing TLS termination, and securing the endpoint so it can only be accessed by your Discourse instance.
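As an illustration only, a minimal nginx server block along those lines might look like this (the hostname, certificate paths, and allowed IP are placeholders to adapt to your own setup):

server {
    listen 443 ssl;
    server_name embeddings.example.com;                # placeholder hostname

    ssl_certificate     /etc/letsencrypt/live/embeddings.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/embeddings.example.com/privkey.pem;

    # Only let the Discourse server reach the endpoint
    allow 203.0.113.10;                                # placeholder: your Discourse instance's IP
    deny  all;

    location / {
        proxy_pass http://127.0.0.1:8081;              # the TEI container started above
        proxy_set_header Host $host;
    }
}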

Configuring DiscourseAI

Discourse AI includes site settings to configure the inference server for open-source models. You should point it to your server using the ai_hugging_face_tei_endpoint setting.

After that, set the ai_embeddings_model setting to the model you are running.
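For example, from the rails console (or via the equivalent fields in the admin settings UI). The URL and model name below are placeholders, and the model value must match one of the options the setting offers:

SiteSetting.ai_hugging_face_tei_endpoint = "https://embeddings.example.com" # placeholder URL
SiteSetting.ai_embeddings_model = "bge-large-en"                            # must match a supported option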


The bge-m3 model should work for multilingual (non-English) sites?

Yes, I played with it the week it was silently shared on GitHub and it works well. I'm still waiting to see how it lands on the MTEB leaderboard, as it wasn’t there last I looked.

That said, we have large hosted Discourse instances using the multilingual model the plugin ships, e5, and it performs very well.


Thanks, do you have plans to enable open-source custom endpoints for embeddings? I’m trying to use this model on HuggingFace.

Sorry I don’t understand what you are trying to convey here. This topic is a guide on how to run open-source models for Discourse AI embeddings.

Oh, sorry about that. I’m trying to use an open-source model from a HuggingFace custom endpoint, and I wonder if that’s possible or if it’s planned for the near future :slight_smile:

To check if it’s working, the following command works for me (with BAAI/bge-m3 model):

curl -X POST \
  'http://localhost:8081/embed' \
  -H 'Content-Type: application/json' \
  -d '{ "inputs": "Testing string for embeddings" }'

BTW, you can also use the Swagger web interface at http://localhost:8081/docs/.


This is also a nice embeddings server:


To save space, is it possible to use quantized embeddings? I’d like to use binary quantized embeddings to really cut down the storage size. Having done some tests, I get >90% performance with 32x less storage!


We have been storing embeddings at half precision (half the storage space) and using binary quantization for indexes (32x smaller) by default for a few weeks now, so just updating your site to the latest version should give you an ample reduction in disk usage.
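For anyone curious about the mechanics, pgvector (0.7+) supports both pieces natively. The table and column names below are made up for illustration and are not the plugin's actual schema:

-- half precision column: half the storage of full float vectors
CREATE TABLE topic_embeddings (
  topic_id  bigint PRIMARY KEY,
  embedding halfvec(1024)
);

-- HNSW index over a binary-quantized expression: roughly 32x smaller index
CREATE INDEX ON topic_embeddings
  USING hnsw ((binary_quantize(embedding)::bit(1024)) bit_hamming_ops);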


Could you please also add:

to the supported embedding models?

We plan on making embeddings configurable the same way we did with LLMs, so any model will be compatible soon.


If anyone else has problems with endpoints on the local network (e.g. 192.168.x.x): it seems these are blocked by Discourse (presumably for security reasons) and the block needs to be bypassed. Lost some hours figuring that one out!


@Falco that would be great. In the interim, if I wanted to have a stab at adding a new embedding model, do I just need to add:

 lib/embeddings/vector_representations/mxbai-embed-xsmall-v1.rb
 lib/tokenizer/mxbai-embed-xsmall-v1.rb
 tokenizers/mxbai-embed-xsmall-v1.json

and modify lib/embeddings/vector_representations/base.rb to include the new model, or is there something else I need to change too?
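Something along these lines is what I had in mind. It is purely a sketch: the method names are guessed from the existing vector representation classes, and the dimensions/sequence length are placeholders, so the real interface should be copied from a sibling class (e.g. the all-mpnet one) and the values checked against the model card:

# frozen_string_literal: true

# Sketch only -- method names and values are assumptions, not the plugin's actual API.
module DiscourseAi
  module Embeddings
    module VectorRepresentations
      class MxbaiEmbedXsmallV1 < Base
        def self.name
          "mxbai-embed-xsmall-v1"
        end

        def dimensions
          384 # placeholder -- verify against the model card
        end

        def max_sequence_length
          512 # placeholder -- verify against the model card
        end

        def vector_from(text)
          # assumes the same TEI inference client the other open-source models use
          DiscourseAi::Inference::HuggingFaceTextEmbeddings.perform!(text)
        end

        def tokenizer
          DiscourseAi::Tokenizer::MxbaiEmbedXsmallV1Tokenizer
        end
      end
    end
  end
end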

@Falco I tried my hand at adding the model and sent a pull request. Apologies if I did something wrong as I’m not really a SW developer. I hoped you could maybe look over it and see if it is OK for inclusion.

Unfortunately, I was not able to get it working with TEI. I could get the all-mpnet working with TEI, but I think there’s something wrong with what I have done to get mxbai working.

BTW, any chance of supporting https://github.com/michaelfeil/infinity as an embedding server?

EDIT: I see this is going to be messy, as the HNSW indexes in the database seem to be hardcoded, so new models need to be appended at the end to avoid disrupting the ordering, and each new model needs its own index.

I really recommend waiting a couple of weeks until we ship support for configurable embeddings.

This should work fine when we ship configurable embeddings, but out of curiosity, what would that bring over GitHub - huggingface/text-embeddings-inference: A blazing fast inference solution for text embeddings models?

I haven’t kept up with TEI, so I won’t mention advantages I haven’t tested recently, but of the things I saw recently:

  • Hardware support: infinity has better GPU support than TEI
  • infinity server can host multiple embedding models in a single server (unless I missed this in TEI)

It’s very nice. If you haven’t tried it, you should take a look!
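If it helps anyone evaluating it: infinity publishes a Docker image, and if I remember the CLI correctly (treat the image tag and flags as assumptions and check the project README), serving two models from one container looks something like:

docker run --rm --gpus all -p 7997:7997 \
  michaelf34/infinity:latest \
  v2 --model-id BAAI/bge-large-en-v1.5 --model-id BAAI/bge-m3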
