We will see how much smarter it is, or how much less it hallucinates. I remember how much buzz there was when GPT-4o came out, and now there are a lot of complaints.
But it's good to try.
Bit off topic, but I don't totally understand why we have two places to set the model and API: the settings and the LLM section?
That's a good point. I doubt the hallucination problems have improved with this model; however, I managed to largely mitigate them by putting a bunch of constraints throughout the system prompt, though that has its own downsides, of course.
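For what it's worth, a minimal sketch of what I mean by "constraints in the system prompt" looks roughly like this. The exact wording, the `build_system_prompt` helper, and the rule list are all illustrative assumptions, not the actual prompt I use:

```python
BASE_PROMPT = "You are a documentation assistant."

# Hypothetical anti-hallucination rules; the real ones would be
# tailored to the task and repeated at key points in the prompt.
CONSTRAINTS = [
    "Only answer using facts present in the provided context.",
    "If the context does not contain the answer, say 'I don't know'.",
    "Never invent function names, URLs, or version numbers.",
]

def build_system_prompt(base: str, constraints: list[str]) -> str:
    """Append each constraint as an explicit rule after the base prompt."""
    rules = "\n".join(f"- {c}" for c in constraints)
    return f"{base}\n\nHard rules:\n{rules}"

if __name__ == "__main__":
    print(build_system_prompt(BASE_PROMPT, CONSTRAINTS))
```

The downside I mentioned: the more rules like these you stack up, the more often the model refuses or hedges on questions it actually could answer.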
It makes sense only for the manual LLM setup, so I find myself asking the same question.