We have customers making thousands of AI calls daily through OpenAI-compatible proxies, so we know it works. The main issue with providers that claim to be “OpenAI compatible” is how compatible they really are.
vLLM, Google, Ollama, and LMStudio all provide OpenAI-compatible APIs that we test and use daily.
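For context, “OpenAI compatible” usually just means you can point the standard OpenAI client at a different base URL. A rough sketch using the official OpenAI Python SDK, assuming a local Ollama server (the base URL, API key, and model name below are illustrative, not our config):

```python
from openai import OpenAI

# Point the standard client at an OpenAI-compatible endpoint instead of api.openai.com.
client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. a local Ollama server
    api_key="not-needed-locally",          # many local servers ignore the key
)

resp = client.chat.completions.create(
    model="llama3",  # placeholder; use a model your server actually serves
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```

Whether everything beyond the basics (streaming, tool calls, logprobs, etc.) behaves the same varies a lot between providers, which is where the compatibility issues usually show up.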
If a specific provider is failing, it’s usually easy to find out why from the logs on the /logs page. Can you share the error from there?