Discourse AI with local Ollama: Internal Server Error

I found out that the model was the problem. Using qwen2.5:3b with Groq as the provider and QwenTokenizer as the tokenizer makes the test pass.
