Falco
(Falco)
1 October 2024, 4:16pm
32
Yes.
Yes, each model produces different vector representations.
It’s basically one call per topic, so very easy to calculate.
If most of your topics are long, they will be truncated to 8k tokens; otherwise the token count will match your topic length.
Yes.
Overgrow:
I assume that for both related topics and AI-powered search, all posts need to be vectorized only once, so I can calculate the total number of words in posts table and derive the number of tokens needed. The same process would apply to the daily addition of posts. I’m neglecting the search phrases for now.
Both work at the topic level, so one per topic.
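Given one embedding call per topic and the 8k-token truncation above, a rough back-of-the-envelope estimate can be sketched like this. Note the tokens-per-word ratio and the helper name are assumptions for illustration, not part of Discourse AI; the real count depends on the embedding model's tokenizer.

```python
# Rough token estimate for an embeddings backfill, assuming ~1.3 tokens
# per word (a common rule of thumb, NOT an exact tokenizer count) and
# the 8k-token truncation limit mentioned above.

TOKENS_PER_WORD = 1.3        # assumed average; varies by tokenizer and language
MAX_TOKENS_PER_TOPIC = 8192  # truncation limit per topic

def estimate_embedding_tokens(topic_word_counts):
    """Total tokens across topics, one embedding call per topic."""
    total = 0
    for words in topic_word_counts:
        # Long topics are capped at the truncation limit.
        total += min(int(words * TOKENS_PER_WORD), MAX_TOKENS_PER_TOPIC)
    return total

# Example: three topics, one long enough to be truncated.
print(estimate_embedding_tokens([500, 2000, 50_000]))  # → 11442
```

The same function applied to the daily batch of new topics gives the incremental cost.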
2 Likes
RGJ
(Richard - Communiteq)
21 March 2025, 5:31pm
33
It seems that this documentation topic has been out of date since this commit, as has this documentation topic.
4 Likes
Falco
(Falco)
21 March 2025, 5:32pm
34
Indeed. @Saif can you update here?
3 Likes
Saif
(Saif Murtaza )
25 March 2025, 9:46pm
37
The OP has now been updated.
1 Like
How can I properly add the gems from the suggested PR without forking the plugin?
I’m trying the scale-to-zero feature on HuggingFace, and I just need to run the rake task to backfill embeddings.
jlcoo
(Jiang Long)
7 July 2025, 8:33am
40
Why do I get a 418 error code when using the Discourse AI embeddings full search (DiscourseAi::Embeddings::EmbeddingsController#search) as JSON? Could you help me?