What inference server are you using? vLLM?
When configuring the URL, append /v1/chat/completions to the end of the base URL.
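As a minimal sketch of what that looks like in practice, assuming a local vLLM server at http://localhost:8000 and a placeholder model name (adjust both to your setup):

```python
import requests

# Assumed local vLLM OpenAI-compatible server; change host/port to match yours.
BASE_URL = "http://localhost:8000"
URL = BASE_URL + "/v1/chat/completions"  # full endpoint path, as noted above

payload = {
    "model": "your-model-name",  # placeholder: the model vLLM was launched with
    "messages": [{"role": "user", "content": "Hello!"}],
}

resp = requests.post(URL, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If you instead use an OpenAI-style client library, you typically set the base URL to http://localhost:8000/v1 and the client appends /chat/completions for you.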