Self-Hosting Embeddings for DiscourseAI

The Discourse AI plugin has many features that require embeddings to work, such as Related Topics, AI Search, AI Helper, and category and tag suggestions. While you can use a third-party API (for example, Configure API keys for OpenAI, Configure API keys for Cloudflare Workers AI, or Configure API keys for Google Gemini), we built Discourse AI from day one so that it does not depend on those services.

Running with HuggingFace TEI

HuggingFace provides an excellent container image that lets you get this service up and running quickly.

For example:

mkdir -p /opt/tei-cache
docker run --rm --gpus all --shm-size 1g -p 8081:80 \
  -v /opt/tei-cache:/data \
  ghcr.io/huggingface/text-embeddings-inference:latest \
  --model-id BAAI/bge-large-en-v1.5

This will give you a local instance of BAAI/bge-large-en-v1.5, a very capable open-source embeddings model.

You can check whether it is working by running:

curl -X POST \
  'http://localhost:8081/embed' \
  -H 'Content-Type: application/json' \
  -d '{ "inputs": "Testing string for embeddings" }'

If it is working, this should return an array of floating-point numbers.
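For a quick programmatic check, the sketch below posts to the same endpoint and sanity-checks the response shape. The URL and the dimension check are assumptions based on the setup above (bge-large-en-v1.5 produces 1024-dimensional vectors):

```python
import json
from urllib import request

def embed(texts, url="http://localhost:8081/embed"):
    """POST a list of strings to a TEI /embed endpoint; returns one vector per input."""
    body = json.dumps({"inputs": texts}).encode("utf-8")
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

def looks_like_embeddings(payload, dim=1024):
    """Sanity-check a TEI response: a non-empty list of equal-length float vectors."""
    return (
        isinstance(payload, list)
        and len(payload) > 0
        and all(
            len(vec) == dim and all(isinstance(x, float) for x in vec)
            for vec in payload
        )
    )
```

If `looks_like_embeddings(embed(["Testing string for embeddings"]))` comes back false, the server is likely serving a different model or returning an error body.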

Making it reachable from your Discourse instance

Most of the time, you will be running this service on a dedicated server because of the GPU acceleration. In that case, it is recommended to run a reverse proxy in front of it, terminate TLS, and secure the endpoint so that only your Discourse instance can connect to it.
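As a minimal sketch of such a setup with nginx (the hostname, certificate paths, and allowed IP below are all placeholders you would replace with your own):

```nginx
server {
    listen 443 ssl;
    server_name tei.example.com;

    # TLS termination at the proxy
    ssl_certificate     /etc/ssl/certs/tei.example.com.pem;
    ssl_certificate_key /etc/ssl/private/tei.example.com.key;

    location / {
        # Only allow the Discourse server to reach the endpoint
        allow 203.0.113.10;
        deny all;

        # Forward to the TEI container started above
        proxy_pass http://127.0.0.1:8081;
        proxy_set_header Host $host;
    }
}
```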

Configuring Discourse AI

Discourse AI now uses a fully configurable embeddings-definition system, similar to how LLMs are configured. To configure your self-hosted endpoint:

  1. Go to Admin → Plugins → Discourse AI → Embeddings.
  2. Click Create to add a new embeddings definition.
  3. Pick the template that matches your model (for example, bge-large-en, bge-m3, or multilingual-e5-large), or choose Manual configuration for any other model.
  4. Set the URL to point at your self-hosted TEI server (for example, https://your-tei-server:8081).
  5. Use the Test button to verify the connection before saving.
  6. After saving, set the ai_embeddings_selected_model setting to your new embeddings definition.

Once configured, Discourse will automatically backfill embeddings for existing topics via a scheduled background job. If you have a large backlog, you can increase the hidden ai_embeddings_backfill_batch_size setting (default: 250) to process topics faster.
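Hidden settings like this do not appear in the admin UI; on a standard docker-based install they are typically changed from the rails console (the value 1000 below is just an example):

```
cd /var/discourse
./launcher enter app
rails c
> SiteSetting.ai_embeddings_backfill_batch_size = 1000
```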


Should the bge-m3 model work for multilingual (or non-English) sites?

Yes, I played with it the week it was silently shared on GitHub, and it works well. I'm still waiting to see how it lands on the MTEB leaderboard, as it wasn't there last I looked.

That said, we have large hosted Discourse instances using the multilingual model the plugin ships, e5, and it performs very well.


Thanks. Do you have plans to enable open-source custom endpoints for embeddings? I'm trying to use these models on HuggingFace.

Sorry, I don't understand what you are trying to convey here. This topic is a guide on how to run open-source models for Discourse AI embeddings.

Oh, sorry about that. I'm trying to use an open-source model from a HuggingFace custom endpoint, and I wonder if that's possible, or if it's planned for the near future :slight_smile:

To check if it's working, the following command works for me (with the BAAI/bge-m3 model):

curl -X POST \
  'http://localhost:8081/embed' \
  -H 'Content-Type: application/json' \
  -d '{ "inputs": "Testing string for embeddings" }'

BTW, you can also use the Swagger web interface at http://localhost:8081/docs/.


This is also a nice embeddings server:


To save space, is it possible to use quantized embeddings? I’d like to use binary quantized embeddings to really cut down the storage size. Having done some tests, I get >90% performance with 32x less storage!


We are storing embeddings using half precision (half the storage space) and using binary quantization for indexes (32x smaller) by default as of a few weeks ago, so just updating your site to the latest version should give you an ample reduction in disk usage.
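To illustrate the 32x figure: a float32 dimension occupies 32 bits, while binary quantization keeps only 1 bit (the sign) per dimension. A minimal sketch of the idea, not the plugin's actual implementation:

```python
def binary_quantize(vec):
    """Keep only the sign of each dimension: 1 bit instead of 32 (a 32x reduction)."""
    return [1 if x > 0 else 0 for x in vec]

def hamming_distance(a, b):
    """Distance between binary vectors: the count of differing bits."""
    return sum(x != y for x, y in zip(a, b))

vec = [0.12, -0.40, 0.03, -0.07]
bits = binary_quantize(vec)  # → [1, 0, 1, 0]
```

Hamming distance over such bit vectors is also cheap to compute, which is why binary-quantized indexes tend to be fast as well as small; re-ranking the top candidates with the full-precision vectors recovers most of the lost accuracy, consistent with the >90% figure mentioned above.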


Could you please also add:

to the supported embedding models?

We plan on making embeddings configurable the same way we did with LLMs, so any model will be compatible soon.


If anyone else has problems with endpoints on the local network, e.g. 192.168.x.x: it seems these are blocked by Discourse (presumably for security reasons), and the block needs to be bypassed. Lost some hours figuring that one out!


@Falco that would be great. In the interim, if I wanted to have a stab at adding in a new embedding model, do I just need to add:

 lib/embeddings/vector_representations/mxbai-embed-xsmall-v1.rb
 lib/tokenizer/mxbai-embed-xsmall-v1.rb
 tokenizers/mxbai-embed-xsmall-v1.json

and modify lib/embeddings/vector_representations/base.rb to include the new model, or is there something else I need to change too?

@Falco I tried my hand at adding the model and sent a pull request. Apologies if I did something wrong as I’m not really a SW developer. I hoped you could maybe look over it and see if it is OK for inclusion.

Unfortunately, I was not able to get it working with TEI. I could get the all-mpnet working with TEI, but I think there’s something wrong with what I have done to get mxbai working.

BTW, any chance of supporting https://github.com/michaelfeil/infinity as an embedding server?

EDIT: I see this is going to be messy, as the HNSW indexes in the database seem to be hardcoded, so new models need to be appended at the end to avoid disrupting the ordering, and each new model needs to add its own index.

I really recommend waiting a couple of weeks until we ship support for configurable embeddings.

This should work fine when we ship configurable embeddings, but out of curiosity, what would that bring over TEI (huggingface/text-embeddings-inference)?

I haven't kept up with TEI, so I won't mention advantages that I haven't tested recently, but of the things I saw recently:

  • Hardware support: infinity has better GPU support than TEI
  • infinity server can host multiple embedding models in a single server (unless I missed this in TEI)

It’s very nice. If you haven’t tried it, you should take a look!


A friend just DM’ed me this thread.

Some pros and cons:

  • infinity supports multi-modal embeddings (i.e., you can send images/audio)
  • AMD GPU support
  • multiple models can be served from the same container (control the model via the model param)
  • more dtypes, e.g. int8 quantization of the weights (mostly this is irrelevant, as activation memory is larger)
  • new models often ship via “custom modeling code” in the HuggingFace repo; Infinity reads this PyTorch code if needed, which helps you avoid “can you support xyz model” requests on an ongoing basis
  • more models supported (e.g. debertav2 for mixedbread)

Cons:

  • cold start time of TEI is better

Hi Michael :wave:

@roman has been busy restructuring our embedding config at:

We should be done very soon, and once that is done, adding support for infinity should be trivial.

I still think a lot about multi-modal embeddings; they give you a shortcut when doing RAG on PDFs, because you can just process each page into an image and embed it, avoiding the need for OCR or expensive image-to-text powered by an LLM.

Once we get this PR done, we will be more than happy to add infinity support (and multi-modal support) to the embeddings config.

Thanks for popping in :hugs:


I wonder whether building litellm support might offer a shortcut, as you would then benefit from all the models litellm supports. Other projects seem to embed this.