GPT-4o mini landed - going to be supported?

Title says it all – just curious if there will be a plan to support this model and if so, is there a rough estimate on when it could be implemented?

This is a massive improvement for anyone currently using GPT-3.5 Turbo, as it's smarter and over 60% cheaper.
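For a rough sense of where the "over 60% cheaper" figure comes from, here is a small back-of-the-envelope calculation. The per-token prices below are assumptions based on OpenAI's launch pricing (not stated in this thread), so double-check the current pricing page before relying on them:

```python
# Assumed prices in USD per 1M tokens (check OpenAI's pricing page for current numbers).
PRICES = {
    "gpt-3.5-turbo": {"input": 0.50, "output": 1.50},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request for the given model."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10k input tokens and 2k output tokens.
old = cost("gpt-3.5-turbo", 10_000, 2_000)  # 0.008
new = cost("gpt-4o-mini", 10_000, 2_000)    # 0.0027
saving = 1 - new / old                       # ~0.66, i.e. over 60% cheaper
```

The exact saving depends on your input/output token mix, but with these prices it stays above 60% for typical usage.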

More info about this model on TechCrunch

4 Likes

It already works with the Discourse AI plugin – you just need to make sure to use `gpt-4o-mini` as the name in your LLM configuration:

(screenshot of the LLM configuration with gpt-4o-mini as the model name)
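If you want to sanity-check that the name you typed into the LLM configuration matches what the provider's API actually accepts, a quick sketch along these lines can help. The endpoint and payload shape follow OpenAI's chat completions API; the API key is a placeholder, and the request is built but not sent:

```python
import json
import urllib.request

# The model name in the Discourse AI LLM config must match what the
# provider's API expects, byte for byte.
MODEL_NAME = "gpt-4o-mini"

payload = {
    "model": MODEL_NAME,
    "messages": [{"role": "user", "content": "ping"}],
    "max_tokens": 1,
}

def build_request(api_key: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request to verify the model name."""
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_request("sk-...")  # placeholder key; sending the request would
                               # return an error if the model name is wrong
```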

10 Likes

Well this is embarrassing. Thanks boss :grin::robot:
Perhaps I haven’t played around with it enough - note made

2 Likes

We will see how much smarter it is, or how much better it is at hallucinating :smirk: I remember how much buzz there was when GPT-4o came out, and now there are a lot of complaints.

But good to try.

Bit off topic, but I don't totally understand why we have two places to set the model and API – the settings and the LLM section?

1 Like

That’s a good point. I suspect the hallucination problems won't have improved much with this model; however, I managed to largely mitigate them by putting a bunch of constraints throughout the system prompt – though that has its own downsides, of course.
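The constraint approach mentioned above can be sketched like this. The wording of the rules is purely illustrative, not the poster's actual prompt:

```python
# Illustrative only: layering explicit constraints into the system prompt
# to curb hallucination, at the cost of a longer and more rigid prompt.
CONSTRAINTS = [
    "Only answer using information from the provided context.",
    "If the context does not contain the answer, say you don't know.",
    "Never invent URLs, names, or statistics.",
    "Quote the relevant passage when making a factual claim.",
]

def build_system_prompt(role_description: str) -> str:
    """Join a role description with hard constraints into one system prompt."""
    rules = "\n".join(f"- {c}" for c in CONSTRAINTS)
    return f"{role_description}\n\nHard rules:\n{rules}"

prompt = build_system_prompt("You are a support assistant for our forum.")
```

The downside hinted at above is real: every rule consumes prompt tokens on every request and can make the model overly cautious for questions the context does cover.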

It only makes sense for the manual LLM setup, so I find myself asking the same question :sweat_smile:

The idea is that you set up your LLM first and then pick that LLM for the Discourse AI feature of your choosing, hence why they are currently in two separate places.

We have had some internal discussions about perhaps having this process all in one place, i.e. right when you set up the LLM you can also toggle it on for specific AI features, like how we have it for the AI bot.