Thanks a lot
Can we use DeepSeek?
Sure, it has an OpenAI-compatible API
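For example, a quick sketch of a request to it; the endpoint and model name follow DeepSeek's public docs at the time of writing, and DEEPSEEK_API_KEY is a placeholder for your own key:
curl https://api.deepseek.com/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${DEEPSEEK_API_KEY}" \
-d '{
"model": "deepseek-chat",
"messages": [ { "role": "user", "content": "Hello" } ]
}'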
Please help; I couldn't find this setting (or the one shown in your screenshot) anywhere in Admin > AI Settings.
This is available within Personas: select a specific editable (non-system) persona and look below the prompt.
4 posts were merged into an existing topic: Will RAG Support PDF Files in the Future?
A post was split to a new topic: Provide visual cue when a topic is receiving an AI response
2 posts were split to a new topic: Ways to add knowledge to my persona
2 posts were split to a new topic: Concerns over personal privacy with the AI plugin
Welcome to Discourse Meta!
You will want to post this in Support
Hi everyone, we are self-hosting vLLM and generate API tokens with Fernet, which contain an '=' sign. Checking /var/discourse/shared/standalone/log/var-log/nginx/error.log, it looks to me as if the '=' sign is replaced with '%3D', and thus my request is not authorised.
Could this be the case? Could it be resolved?
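For context: Fernet tokens are URL-safe base64 and typically end with '=' padding, which is exactly the character that URL encoding turns into %3D. A quick sketch, with a placeholder token string:
python3 -c 'from urllib.parse import quote; print(quote("gAAAAABtoken=="))'
# prints gAAAAABtoken%3D%3D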
Thanks for the input.
My case is rather simple. We already have vLLM and openWebUI services exposing LLMs to the world; both work well. I can also check with simple cURL calls that I can indeed access both from inside the Discourse container:
vLLM:
curl -v ${LLM_URL} -H "Content-Type: application/json" -H "Authorization: Bearer ${LLM_TOKEN}" \
-d '{
"model": "'"${LLM_MODEL}"'",
"prompt": "'"${TEST_PROMPT}"'",
"max_tokens": 128,
"temperature": 0
}'
openWebUI:
curl -v ${LLM_URL} -H "Content-Type: application/json" -H "Authorization: Bearer ${LLM_TOKEN}" \
-d '{
"model": "'"${LLM_MODEL}"'",
"messages": [
{ "role": "user",
"content": "'"${TEST_PROMPT}"'"
}
]
}'
Now I installed the discourse-ai plugin on a self-hosted Discourse and tried to configure access via 'LLMs' -> 'Manual configuration' -> Provider = vLLM. In both cases I have to provide an API key. Unfortunately, neither works:
vLLM with a Fernet token returns {"error":"Unauthorized"}
openWebUI returns {"detail":"Not authenticated"}
My suspicion is that the Fernet token fails because the '=' sign is converted to '%3D', but I am even more puzzled by openWebUI's 'Not authenticated', since that token is just a plain string.
I have no idea how the discourse-ai plugin sends the token/API key in the case of vLLM, but I hope it is via the 'Authorization: Bearer' header.
Any help or experience configuring vLLM with an API key is welcome.
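One way to test the '=' suspicion directly is a sketch reusing the ${LLM_URL}, ${LLM_TOKEN} and ${LLM_MODEL} variables from the cURL calls above, with the token pre-encoded the way the nginx log suggests:
# if this reproduces the "Unauthorized" error, the %3D encoding is the culprit
curl -v ${LLM_URL} -H "Content-Type: application/json" \
-H "Authorization: Bearer ${LLM_TOKEN//=/%3D}" \
-d '{ "model": "'"${LLM_MODEL}"'", "prompt": "test", "max_tokens": 8 }'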
Try setting the API Provider to OpenAI if you need the API key sent in the bearer token format.
@Falco That worked at least for openWebUI! Thanks a lot!
2 posts were split to a new topic: Best models and prompts for testing Discourse search and Discoveries
I have an issue related to the output of the AI in my forum.
The language of my forum is Arabic, so there should be a setting to make the language of the AI output match the forum language; it is not appropriate that when I ask for a summary of a topic that is entirely in Arabic, the output comes back in English.
Tell the AI in the prompts that it should respond in Arabic. In theory, 'respond using the same language' should work too, but it rarely worked for me, at least in a Finnish context with OpenAI models.
So you do have a setting, and that is the system prompt of the AI agent/persona in use.
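For example, a line you could add to the persona's system prompt (the exact wording is just an illustration):
Always respond in Arabic, regardless of the language of the question or of the topic content.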
Would it make sense to use this behavior as the default when a language other than English is detected?
I'm recreating all the personas in Spanish; it's hard to keep them updated with so many (good) changes.
You can configure it in the Personas tab of the AI plugin.