Self-Hosting Embeddings for DiscourseAI

Hi Michael :wave:

@roman has been busy restructuring our embedding config at:

We should be done very soon; once that lands, adding support for Infinity should be trivial.

I still think a lot about multimodal embeddings: they give you a shortcut when doing RAG on PDFs, because you can just render each page to an image and embed the images directly, avoiding OCR or an expensive LLM-powered image-to-text step.
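To make that pipeline concrete, here is a minimal sketch of the idea (one image embedding per PDF page, no OCR). Everything here is hypothetical: `render_pages` stands in for a real rasterizer (e.g. pymupdf), and `embed_image` stands in for a call to a multimodal embedding server such as Infinity running a CLIP-style model; both are stubbed so the sketch runs on its own.

```python
from dataclasses import dataclass


@dataclass
class PageChunk:
    page_number: int
    embedding: list[float]


def render_pages(pdf_path: str, page_count: int) -> list[bytes]:
    # Hypothetical: in practice, rasterize each PDF page to a PNG
    # with a library like pymupdf. Stubbed with fake bytes here.
    return [f"{pdf_path}#page={i}".encode() for i in range(page_count)]


def embed_image(png: bytes, dim: int = 4) -> list[float]:
    # Hypothetical: in practice, send the image to an embedding
    # endpoint (e.g. Infinity serving a CLIP-style model).
    # Stubbed with a deterministic toy vector so the sketch runs.
    return [b / 255 for b in png[:dim]]


def ingest(pdf_path: str, page_count: int) -> list[PageChunk]:
    # The shortcut: no OCR and no LLM captioning step; each page
    # image is embedded directly and stored as a retrievable chunk.
    return [
        PageChunk(i, embed_image(png))
        for i, png in enumerate(render_pages(pdf_path, page_count))
    ]


chunks = ingest("report.pdf", page_count=3)
print(len(chunks))  # one chunk (embedding) per page
```

At query time you would embed the user's question with the same multimodal model and do nearest-neighbor search over these page vectors, retrieving page images rather than extracted text.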

Once we get this PR done we will be more than happy to add Infinity support (and multimodal support) to the embedding config.

Thanks for popping in :hugs:
