Discourse AI with local Ollama: internal server error

I found out that the model was the problem. Using qwen2.5:3b with Groq as the provider and QwenTokenizer makes the test pass.
