Discourse Chatbot 🤖

There’s a PR open to add GPT-5 but there’s something going wrong during CI.

I’ve opened a Dev topic about it.

It has been merged.

If you find GPT-5’s reasoning too slow you can change the reasoning level. There’s a new minimal level now.
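For anyone hitting the API directly rather than through the plugin's settings, the knob is the `reasoning_effort` parameter on the Chat Completions request. A minimal sketch of building the payload by hand (the plugin exposes this as a setting, so you normally won't need this):

```python
# Sketch of lowering GPT-5's reasoning level via the OpenAI
# Chat Completions API by constructing the request payload directly.

def build_request(prompt: str, effort: str = "minimal") -> dict:
    """Return a Chat Completions payload with a given reasoning effort."""
    allowed = {"minimal", "low", "medium", "high"}
    if effort not in allowed:
        raise ValueError(f"effort must be one of {sorted(allowed)}")
    return {
        "model": "gpt-5",
        "reasoning_effort": effort,  # "minimal" trades depth for speed
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarise this topic", effort="minimal")
```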

Thanks to @NateDhaliwal for his assistance on this one!


Our bot was timing out until we set reasoning to minimal. Thanks!


Tbh I’m finding GPT-5 generally too slow with higher reasoning levels and not obviously worth the additional lead time in responses.

How have you found it for your support bot? Does a minimal reasoning GPT-5 perform better than say 4o or 4.1?

I’ve tried GPT-5 through ChatGPT, which is a really different thing than using it via the API, and it needs that long reasoning time to give slightly better answers than 4o or o1 would. When it has to answer fast, it is no better than 4.1.

I’m quite sure the situation is same-ish via the API, or worse because of the lack of tools and prompting. But I don’t know for sure, because GPT-5 is painfully slow, and in a forum environment it must answer close to the speed of light.


In terms of content performance, anecdotally, it seems like gpt-5 is giving noticeably better technical answers than gpt-4o. I’m not sure how to quantify that but it really impressed me.

I’m getting varying results in how long it takes to respond. It does seem, from experimenting this morning, like gpt-5 is slower on average but not by too much, and there were some cases where the response came faster with gpt-5. I’m measuring anywhere from 5 seconds to 35 seconds for a reply.

We’re using RAG and I can’t tell what portion of the latency is from the RAG search vs the chat completion. It could be that sometimes it chooses not to RAG search, the search happens faster, or something is cached (in the search or the completion).
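One way to tell the RAG search apart from the completion is to time each stage separately and log the breakdown. A sketch with stubbed-out stages (`rag_search` and `chat_completion` are stand-ins for whatever your pipeline actually calls):

```python
import time

# Hypothetical stand-ins for the real pipeline stages.
def rag_search(question: str) -> list[str]:
    return ["doc snippet 1", "doc snippet 2"]

def chat_completion(question: str, context: list[str]) -> str:
    return "answer"

def answer_with_timings(question: str) -> tuple[str, dict[str, float]]:
    """Run the pipeline and report per-stage wall-clock time."""
    timings: dict[str, float] = {}

    t0 = time.monotonic()
    context = rag_search(question)
    timings["rag_search_s"] = time.monotonic() - t0

    t1 = time.monotonic()
    reply = chat_completion(question, context)
    timings["completion_s"] = time.monotonic() - t1

    return reply, timings

reply, timings = answer_with_timings("How do I reset my alarm panel?")
```

Logging `timings` per request would also surface whether the fast cases are ones where the search step was skipped or cached.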

We would typically choose better answers over a faster response, because giving customers bad technical advice is costly. Only up to a point, though: if it times out, that’s a very bad user experience.

GPT-5 recommends primarily gpt-5-mini for our use case, escalating to gpt-5 in some circumstances. Sounds neat but complicated. Have you considered switching between models dynamically? Why doesn’t OpenAI just do that automatically? ChatGPT - Compare GPT models performance
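Dynamic switching doesn’t have to be complicated: a cheap first pass is a heuristic router that sends most traffic to gpt-5-mini and escalates when the question looks hard. A sketch (the trigger words and length cutoff are invented for illustration and would need tuning against real traffic):

```python
# Hypothetical router: default to gpt-5-mini, escalate to gpt-5 when
# the question looks like it needs deeper reasoning.
ESCALATION_TRIGGERS = ("why", "compare", "debug", "error", "integrate")

def choose_model(question: str, max_cheap_len: int = 400) -> str:
    """Pick a model name based on rough question difficulty."""
    q = question.lower()
    if len(question) > max_cheap_len:
        return "gpt-5"  # long questions tend to need more reasoning
    if any(word in q for word in ESCALATION_TRIGGERS):
        return "gpt-5"
    return "gpt-5-mini"
```

A fancier version would use a cheap classifier call (llm-as-router) instead of keywords, which is roughly what an automatic OpenAI-side router would do.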


We had to switch back to gpt-4o because apparently gpt-5-mini thinks it can do things it can’t do. It confidently offered to set up a customer’s alarm monitoring service for them and connect it to their home alarm equipment. It asked them for equipment ID numbers and hallucinated like it was a concierge setting everything up for them. Our website can do that but the chatbot can’t. It doesn’t seem to be respecting the guardrails in the system prompt like gpt-4o did. We’ll need to tighten it up before we can let people use it.

Update: It turns out that gpt-5 is much better at following instructions and respecting rules in the prompt than gpt-5-mini. If you’re going to let a bot represent your brand, I recommend gpt-5 even though it’s slower and 5x more expensive. There’s too much risk that gpt-5-mini will go off the rails.
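A belt-and-braces option, whichever model you pick, is to check replies for action-claims before posting them. A sketch of such a post-filter (the phrase list is invented for illustration; a real one would be built from your own incident logs):

```python
# Hypothetical post-filter: block replies where the bot claims to have
# taken real-world actions it has no ability to perform.
FORBIDDEN_CLAIMS = (
    "i have set up",
    "i've connected",
    "i have activated",
    "your service is now active",
)

def violates_guardrails(reply: str) -> bool:
    """Return True if the reply claims to have performed an action."""
    text = reply.lower()
    return any(phrase in text for phrase in FORBIDDEN_CLAIMS)

def safe_reply(reply: str) -> str:
    """Replace a rule-breaking reply with a deflection."""
    if violates_guardrails(reply):
        return ("I can't set that up myself, but here's how to do it "
                "on the website...")
    return reply
```

Phrase matching is crude; an llm-as-judge pass over the draft reply would catch paraphrased claims too, at the cost of extra latency.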


I have had really good luck with GPT-5-mini in agentic flows via tool calling, code writing, and structured data. I generally find structured data is easier for AI apps than unstructured (not what I expected!), but guardrails are easier (code-in-loop, human-in-loop, llm-as-judge, etc.).
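One reason structured output is easier to guard is that you can validate it mechanically before acting on it (the code-in-loop idea). A minimal sketch, assuming the model is asked to return JSON with a fixed schema (field names here are hypothetical):

```python
import json

# Hypothetical schema for a tool-calling reply: the model must return
# JSON with exactly these typed fields before we act on it.
REQUIRED_FIELDS = {"action": str, "confidence": float}

def validate_reply(raw: str) -> dict:
    """Parse and schema-check a model reply; raise if it doesn't conform."""
    data = json.loads(raw)
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise ValueError(f"{field} must be {ftype.__name__}")
    return data

reply = validate_reply('{"action": "search_docs", "confidence": 0.9}')
```

With unstructured prose there is no equivalent cheap check, which is why the guardrails end up harder.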

Please watch this for a blow-by-blow walkthrough of high-performance, low-cost gpt-5-mini and gpt-4o…

If anyone out there is interested in working structured data capabilities into Discourse as a plugin, etc., please reach out.

An NLP extension to Data Explorer for SQL/stats/data science is one example. But there could also be a tool / plugin / feature that allows natural language queries of read-only SQLite or DuckDB OLAP files loaded into the container? Just a thought… :thinking:
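The read-only part is straightforward with SQLite's URI mode, which would let an LLM-generated query run with no risk of writes. A sketch (`llm_to_sql` is a stand-in for the actual natural-language-to-SQL step, and the table is a throwaway example):

```python
import os
import sqlite3
import tempfile

# Build a throwaway database to stand in for an OLAP file in the container.
path = os.path.join(tempfile.mkdtemp(), "stats.db")
with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE topics (id INTEGER, views INTEGER)")
    conn.executemany("INSERT INTO topics VALUES (?, ?)", [(1, 10), (2, 25)])

def llm_to_sql(question: str) -> str:
    # Stand-in for the real natural-language-to-SQL step.
    return "SELECT SUM(views) FROM topics"

def run_readonly(db_path: str, question: str):
    """Run an LLM-generated query against a read-only connection."""
    # mode=ro makes the connection read-only: any write raises an error.
    uri = f"file:{db_path}?mode=ro"
    with sqlite3.connect(uri, uri=True) as conn:
        return conn.execute(llm_to_sql(question)).fetchone()

total = run_readonly(path, "How many views in total?")
```

Even with `mode=ro` you would still want a timeout and a row limit, since a generated query can be expensive without writing anything.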

Btw, I added GPT 5.1 to the plugin along with some fixes: