We will see how much smarter it is, or how much more prone it is to hallucinate. I remember how much buzz there was when GPT-4o came out, and now there are a lot of complaints.
But it's good to try.
A bit off topic, but I don't totally understand why we have two places to specify the model and API: settings and the LLM section?
That’s a good point. I suspect the hallucination problems haven’t improved with this model; however, I managed to largely mitigate them by adding a bunch of constraints throughout the system prompt – though that has its own downsides, of course.
It only makes sense for the manual LLM setup, so I find myself asking the same question.
The idea is that you set up your LLM first and then pick that LLM for the Discourse AI feature of your choosing, which is why they are currently in two separate places.
We have had some internal discussions about perhaps having this whole process in one place, i.e. right when you set up the LLM you could also toggle it on for specific AI features, like how we have it for the AI bot.