There's a PR open to add GPT-5 but there's something going wrong during CI.
I've opened a Dev topic about it.
Has been merged.
If you find GPT-5's reasoning too slow you can change the reasoning level. There's a new minimal level now.
Thanks to @NateDhaliwal for his assistance on this one!
Our bot was timing out until we set reasoning to minimal. Thanks!
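For anyone looking for the actual knob, here's a minimal sketch of what setting the reasoning level might look like with the OpenAI Python SDK. The model name, prompt, and helper function are placeholders for illustration; check the current API reference for the exact parameter your SDK version expects.

```python
# Sketch: request GPT-5 with minimal reasoning effort via the Chat
# Completions API. Assumes the `openai` Python SDK and a model that
# accepts a `reasoning_effort` parameter ("minimal" is the new level).
import os

def build_request(user_message: str) -> dict:
    """Build kwargs for client.chat.completions.create() (hypothetical helper)."""
    return {
        "model": "gpt-5",
        "reasoning_effort": "minimal",  # "minimal" | "low" | "medium" | "high"
        "messages": [{"role": "user", "content": user_message}],
    }

kwargs = build_request("How do I configure the support bot?")

# Only make the network call if an API key is actually configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    reply = client.chat.completions.create(**kwargs)
    print(reply.choices[0].message.content)
```

Dropping from the default effort to `"minimal"` is what fixed the timeout described above, at the cost of shallower reasoning.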
Tbh I'm finding GPT-5 generally too slow with higher reasoning levels and not obviously worth the additional lead time in responses.
How have you found it for your support bot? Does a minimal-reasoning GPT-5 perform better than, say, 4o or 4.1?
I've tried gpt-5 through ChatGPT, which is really a much different thing than via the API, and it needs that long reasoning time to give slightly better answers than 4o or o1 would. When it has to answer fast, it is not any better than 4.1.
I'm quite sure the situation is the same-ish via the API, or worse because of the lack of tools and prompting. But I don't know for sure, because gpt-5 is painfully slow and in a forum environment it must answer close to the speed of light.
In terms of content performance, anecdotally, it seems like gpt-5 is giving noticeably better technical answers than gpt-4o. I'm not sure how to quantify that but it really impressed me.
I'm getting varying results in how long it takes to respond. It does seem, from experimenting this morning, like gpt-5 is slower on average but not by too much, and there were some cases where the response came faster with gpt-5. I'm measuring anywhere from 5 seconds to 35 seconds for a reply.
We're using RAG and I can't tell what portion of the latency is from the RAG search vs the chat completion. It could be that sometimes it chooses not to RAG search, the search happens faster, or something is cached (in the search or the completion).
We would typically choose better answers over a faster response because giving customers bad technical advice is costly. Up to a point though, if it times out then that's a very bad user experience.
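One way to untangle where the latency comes from is to time the two phases separately. A rough sketch, where `rag_search` and `complete` are stand-ins for whatever the bot actually calls:

```python
# Sketch: measure RAG-search latency and chat-completion latency separately
# so you can see which phase dominates. The two work functions below are
# placeholders for the real vector search and LLM call.
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

def rag_search(query):           # placeholder for the vector/keyword search
    return ["relevant doc snippet"]

def complete(query, context):    # placeholder for the chat completion
    return "answer"

query = "How do I reset my alarm panel?"
docs, search_s = timed(rag_search, query)
answer, llm_s = timed(complete, query, docs)

print(f"search: {search_s:.3f}s  completion: {llm_s:.3f}s  total: {search_s + llm_s:.3f}s")
```

Logging these two numbers per request would also show whether the fast replies are cases where the bot skipped the RAG search entirely.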
GPT-5 recommends primarily gpt-5-mini for our use case, escalating to gpt-5 in some circumstances. Sounds neat but complicated. Have you considered switching between models dynamically? Why doesn't OpenAI just do that automatically? ChatGPT - Compare GPT models performance
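Dynamic switching can be as simple as a routing function in front of the completion call. A sketch under made-up heuristics (the keyword list and length threshold are purely illustrative; real escalation criteria would come from your own traffic):

```python
# Sketch: route most traffic to gpt-5-mini, escalating to gpt-5 when the
# question looks risky or complex. Heuristics here are invented for
# illustration, not a recommendation.
ESCALATION_KEYWORDS = {"refund", "cancel", "legal", "outage", "security"}

def pick_model(message: str) -> str:
    words = set(message.lower().split())
    if len(message.split()) > 80 or ESCALATION_KEYWORDS & words:
        return "gpt-5"        # slower, pricier, better instruction-following
    return "gpt-5-mini"       # default: cheap and fast

print(pick_model("How do I change my billing email?"))   # gpt-5-mini
print(pick_model("I want to cancel and get a refund"))   # gpt-5
```

The chosen model name is then passed straight into the completion request; the rest of the pipeline stays identical.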
We had to switch back to gpt-4o because apparently gpt-5-mini thinks it can do things it can't do. It confidently offered to set up a customer's alarm monitoring service for them and connect it to their home alarm equipment. It asked them for equipment ID numbers and hallucinated like it was a concierge setting everything up for them. Our website can do that but the chatbot can't. It doesn't seem to be respecting the guardrails in the system prompt like gpt-4o did. We'll need to tighten it up before we can let people use it.
Update: It turns out that gpt-5 is much better at following instructions and respecting rules in the prompt than gpt-5-mini. If you're going to let a bot represent your brand, I recommend gpt-5 even though it's slower and 5x more expensive. There's too much risk that gpt-5-mini will go off the rails.
I have had really good luck with GPT-5-mini in agentic flows via tool calling, code writing, and structured data. I generally find structured data is easier for AI apps than unstructured, which is not what I expected! But guardrails are easier (code-in-loop, human-in-loop, LLM-as-judge, etc.).
Please watch this for a blow-by-blow walkthrough of high-performance, low-cost gpt-5-mini and gpt-4o…
If anyone out there is interested in working structured data capabilities into Discourse as a plugin, etc., please reach out.
An NLP extension to Data Explorer for SQL/stats/data science is one example. But it could also be a tool/plugin/feature that allows natural language queries of read-only SQLite or DuckDB OLAP files loaded into the container? Just a thought.
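The read-only part of that idea can be enforced at the connection level rather than in the prompt. A minimal sketch with Python's built-in `sqlite3` (the file path and toy table are hypothetical):

```python
# Sketch: open a SQLite file read-only via a URI, so a natural-language-to-SQL
# tool can only ever run SELECTs. Writes fail at the driver level, independent
# of whatever SQL the LLM generates.
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "olap.db")

# Set up a toy table (normally the file would already exist in the container).
with sqlite3.connect(db_path) as rw:
    rw.execute("CREATE TABLE posts (id INTEGER, topic TEXT)")
    rw.execute("INSERT INTO posts VALUES (1, 'gpt-5')")

# mode=ro opens the database read-only; any write statement raises an error.
ro = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
rows = ro.execute("SELECT topic FROM posts").fetchall()

write_blocked = False
try:
    ro.execute("DELETE FROM posts")
except sqlite3.OperationalError:
    write_blocked = True

print(rows, "write blocked:", write_blocked)
```

DuckDB supports the same idea via a read-only flag when opening the database, so the guardrail pattern carries over.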
Btw, I added GPT 5.1 to the plugin along with some fixes: