Debugging: adding a new LLM

What inference server are you using? vLLM?

When configuring the URL, make sure to append the path /v1/chat/completions to the base URL.
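
For example, here is a minimal sketch of a request against an OpenAI-compatible chat completions endpoint such as vLLM's. The host, port, and model name below are assumptions, so adjust them to match your deployment:

```python
import requests

# Assumed values: replace with your actual server address and served model name.
base_url = "http://localhost:8000"
url = base_url + "/v1/chat/completions"   # note the /v1/chat/completions path

payload = {
    "model": "my-model",                  # must match the model the server is serving
    "messages": [{"role": "user", "content": "Hello"}],
}

resp = requests.post(url, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this request returns a 404, the path is usually the culprit; if it returns a model-not-found error, check that the model name matches what the server reports.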