Discourse AI

Thanks a lot

1 Like

Can we use Deepseek?

3 Likes

Sure, it has an OpenAI-compatible API

5 Likes

Sorry, but I couldn't find this setting (or the one shown in your screenshot) anywhere in Admin > AI Settings.

1 Like

This is available within Personas. Select a specific editable (non-system) persona and look below the prompt.

2 Likes

4 posts were merged into an existing topic: Will RAG Support PDF Files in the Future?

A post was split to a new topic: Provide visual cue when a topic is receiving an AI response

2 posts were split to a new topic: Ways to add knowledge to my persona

2 posts were split to a new topic: Concerns over personal privacy with the AI plugin

Welcome to Discourse Meta!

You will want to post this in Support

3 Likes

Hi everyone, we are self-hosting vLLM and generate API tokens with Fernet, which contain an "=" sign. Checking /var/discourse/shared/standalone/log/var-log/nginx/error.log, it looks to me like the "=" sign is replaced with "%3D", and thus my request is not authorised.
Could this be the case? Could it be resolved?
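A Fernet token ends in "=" padding, so if anything in the chain percent-encodes it, the string the server receives no longer matches the token it issued. A minimal sketch of the mismatch (the token below is a made-up placeholder, not a real credential):

    # A made-up Fernet-style token; real ones end in "=" padding too.
    TOKEN="gAAAAABh_example_token=="

    # Percent-encode "=" the way a URL-encoder would (bash substitution).
    ENCODED="${TOKEN//=/%3D}"

    echo "sent:     $ENCODED"
    echo "expected: $TOKEN"

    # The server compares raw strings, so the encoded form fails auth.
    [ "$TOKEN" = "$ENCODED" ] && echo "match" || echo "mismatch: request rejected"

Note that HTTP headers are not URL-encoded in transit; only a token placed in the URL's query string would be rewritten this way, which is one way to narrow down where the "%3D" is being introduced.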

Thanks for the input.
My case is rather simple. We already have vLLM and openWebUI services exposing LLMs to the world. Both work well. I can also verify with simple cURL calls that I can indeed access both from inside the Discourse container:
vLLM:

   curl -v ${LLM_URL} -H "Content-Type: application/json" -H "Authorization: Bearer ${LLM_TOKEN}" \
        -d '{
           "model": "'"${LLM_MODEL}"'",
           "prompt": "'"${TEST_PROMPT}"'",
           "max_tokens": 128,
           "temperature": 0
        }'

openWebUI:

   curl -v ${LLM_URL} -H "Content-Type: application/json" -H "Authorization: Bearer ${LLM_TOKEN}" \
        -d '{
           "model": "'"${LLM_MODEL}"'",
           "messages": [
              { "role": "user",
                "content": "'"${TEST_PROMPT}"'"
              }
           ]
        }'

Now I installed the discourse-ai plugin on a self-hosted Discourse and tried to configure access via "LLMs" -> "Manual configuration" -> Provider = vLLM. In both cases I have to provide an API key. Unfortunately, neither works:

vLLM with a Fernet token returns {"error":"Unauthorized"}
openWebUI returns {"detail":"Not authenticated"}

My suspicion is that the Fernet token fails because the "=" sign is converted to "%3D", but I am even more puzzled by the openWebUI "Not authenticated" response, since that token is just a plain string.

I have no idea how the discourse-ai plugin sends the token/API key in the vLLM case, but I hope it is via an "Authorization: Bearer" header.

Any help or experience configuring vLLM with an API key is welcome.

Try setting the API Provider to OpenAI if you need the key sent in the Bearer token format.

@Falco That worked at least for openWebUI! Thanks a lot!

2 posts were split to a new topic: Best models and prompts for testing Discord search and Discoveries

I have an issue related to the AI output in my forum.
My forum's language is Arabic, so there should be a setting to make the AI output match the forum language. It is not appropriate that when I ask for a summary of a topic that is entirely in Arabic, the output comes back in English.

Tell the AI in the prompt that it should respond in Arabic. In theory "respond using the same language" should work too, but it rarely worked for me, at least in a Finnish context with OpenAI models.
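For example, a line like this at the end of the persona's system prompt (the wording is just an illustration, not a built-in setting) usually pins the output language:

    Always respond in Arabic, regardless of the language of the user's message or the topic content.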

So you do have a setting for this: the system prompt of the AI agent/persona in use.

Would it make sense to use this behavior as the default when a language other than English is detected?

I'm redoing all my personas in Spanish; it's hard to keep them updated with so many (good) changes.


How can I activate this forum researcher (where in the settings)? It is not clear.

You can configure it in the Personas tab of the AI plugin.

3 Likes