Self-hosting Embeddings for DiscourseAI

I really recommend waiting a couple of weeks until we ship support for configurable embeddings.

This should work fine once we ship configurable embeddings, but out of curiosity, what would that bring over [huggingface/text-embeddings-inference](https://github.com/huggingface/text-embeddings-inference)?
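For context, text-embeddings-inference (TEI) exposes a simple HTTP API, so self-hosting mostly comes down to pointing a client at its `/embed` endpoint. Below is a minimal sketch of such a client; the local URL and port are assumptions (TEI's default is `8080`), and nothing here is DiscourseAI's actual integration code.

```python
import json
import urllib.request


def build_embed_request(texts, endpoint="http://127.0.0.1:8080/embed"):
    """Build the JSON POST request TEI's /embed endpoint expects.

    TEI accepts {"inputs": <string or list of strings>} and returns a
    list of float vectors, one per input text.
    """
    body = json.dumps({"inputs": texts}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    return endpoint, headers, body


def embed(texts, endpoint="http://127.0.0.1:8080/embed"):
    """POST texts to a (hypothetical, locally running) TEI server."""
    url, headers, body = build_embed_request(texts, endpoint)
    req = urllib.request.Request(url, data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # e.g. [[0.01, -0.23, ...], ...]
```

A client like this is all a consumer needs, which is part of TEI's appeal: the serving side (batching, model loading, quantization) is handled by the server binary rather than the application.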