Support for Mistral API

I’ve been playing with the new Mistral models (mistral-small and mistral-medium) via their API and I like what I’m seeing. Mistral-small (which is the Mixtral 8x7B MoE) appears to outperform GPT-3.5 and could be a viable, if somewhat less capable, alternative to GPT-4 for some use cases. Most importantly, it’s significantly cheaper than GPT-4. I would love to see support for the Mistral API in the Discourse AI plugin.

I tried dropping the Mistral API URL and token into the OpenAI GPT-4 Turbo fields, but unsurprisingly that didn’t work because the plugin was requesting the wrong model. Looking at the API documentation for OpenAI, Anthropic, and Mistral, the request formats are very similar. I imagine this is deliberate: new vendors are aligning with OpenAI’s API so they can be used as drop-in replacements.
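
To illustrate the drop-in point, here is a minimal Python sketch (endpoint and model name taken from Mistral's public docs) showing that the request body is exactly the OpenAI chat format, with only the base URL and model name changed:

```python
import os
import requests

# Same OpenAI-style chat payload, different base URL and model name.
BASE_URL = "https://api.mistral.ai/v1"

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        # An OpenAI client would send e.g. "gpt-4-1106-preview" here instead.
        "model": "mistral-small",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```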

So part 2 of this feature request is to consider refactoring the AI settings to be more generic, so they can accommodate any vendor that adopts an OpenAI-like interface. The settings would only need four things: the chat endpoint, the model list endpoint, the embeddings endpoint, and the API key. Discourse could then query the /models endpoint to fetch the available model names, with the option to type in model names manually; see the sketch below.
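
As a rough sketch of how those four settings could drive model discovery (the field names here are hypothetical, and the /models response shape assumes the OpenAI-style `data` wrapper that Mistral also uses):

```python
import os
import requests

# Hypothetical generic provider settings, mirroring the four fields above.
settings = {
    "chat_endpoint": "https://api.mistral.ai/v1/chat/completions",
    "models_endpoint": "https://api.mistral.ai/v1/models",
    "embeddings_endpoint": "https://api.mistral.ai/v1/embeddings",
    "api_key": os.environ["MISTRAL_API_KEY"],
}

def list_models(settings: dict) -> list[str]:
    """Fetch available model names from the vendor's /models endpoint."""
    resp = requests.get(
        settings["models_endpoint"],
        headers={"Authorization": f"Bearer {settings['api_key']}"},
        timeout=30,
    )
    resp.raise_for_status()
    # OpenAI-style /models responses wrap the list in a "data" key.
    return [m["id"] for m in resp.json()["data"]]

print(list_models(settings))
```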

@Falco just landed Mixtral support today via vLLM.
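
For anyone following along: vLLM exposes an OpenAI-compatible server (port 8000 by default, no auth unless configured), so the same chat payload works against a self-hosted Mixtral. A minimal sketch, assuming a local vLLM server running the Mixtral-8x7B-Instruct weights:

```python
import requests

# vLLM's OpenAI-compatible server defaults to http://localhost:8000/v1,
# so a self-hosted Mixtral accepts the exact same request shape.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```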

@Roman_Rizzi is working on refactoring our internal implementation so the bot can lean on our new “generic LLM” interface.
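
To give a feel for what a “generic LLM” abstraction buys, here is a hypothetical sketch in Python (the plugin itself is Ruby, and none of these names are the actual internals): one adapter class can cover every vendor that speaks the OpenAI dialect.

```python
from abc import ABC, abstractmethod
import requests

class LlmEndpoint(ABC):
    """One configured LLM: a base URL, credentials, and a model name."""

    def __init__(self, base_url: str, api_key: str, model: str):
        self.base_url = base_url
        self.api_key = api_key
        self.model = model

    @abstractmethod
    def chat(self, messages: list[dict]) -> str:
        """Send a chat request and return the assistant's reply text."""

class OpenAiCompatibleEndpoint(LlmEndpoint):
    """Covers any vendor speaking the OpenAI dialect: OpenAI, Mistral, vLLM."""

    def chat(self, messages: list[dict]) -> str:
        resp = requests.post(
            f"{self.base_url}/chat/completions",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"model": self.model, "messages": messages},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

# Swapping vendors then becomes pure configuration:
mistral = OpenAiCompatibleEndpoint("https://api.mistral.ai/v1", "...", "mistral-small")
```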

I agree about rethinking all the AI settings when it comes to LLMs; the patterns we need have outgrown what you can do in site settings.

A new interface listing all the LLMs you have access to, explaining endpoints, params, quotas, and more, is very much something we need to start thinking about. The current approach is just a bit too limited, and given the endless stream of new models out there, we need a new paradigm here.
