Manual configuration for vLLM

I need help with the manual configuration of vLLM in the AI model section at admin/plugins/discourse-ai/ai-llms/new?llmTemplate=none.

Sure, what exactly are you struggling with in there?

I’m unsure about these two options and how to use them, specifically in relation to the API:

URL of the service hosting the model

Tokenizer

That is the hostname of the machine where you are running vLLM. It may also work with an IP in the form http://1.1.1.1:1111, but I haven’t tested that myself.
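If you want to sanity-check the URL before pasting it into the setting, you can probe vLLM’s OpenAI-compatible endpoint. Here is a minimal sketch assuming the server is listening on http://localhost:8000 (swap in your own host and port):

```python
import json
import urllib.request

# Assumed base URL: wherever your vLLM OpenAI-compatible server listens.
# vLLM defaults to port 8000 when started with `vllm serve <model>`.
BASE_URL = "http://localhost:8000"

# The OpenAI-compatible server exposes /v1/models; if this returns a
# model list, the same BASE_URL is what you paste into the plugin setting.
with urllib.request.urlopen(f"{BASE_URL}/v1/models", timeout=5) as resp:
    data = json.load(resp)

for model in data.get("data", []):
    print("Serving model:", model["id"])
```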

We have a few tokenizers to help us limit the size of the prompts before we send them to the LLM. Pick whichever produces the closest results to the tokenizer used by the model you are running in vLLM; it doesn’t need to be a perfect match.

When in doubt, leave it as the OpenAI or Llama3 one.
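To get a feel for how close two tokenizers are, here is a rough sketch using Hugging Face’s transformers package. The model names are just examples, not something the plugin requires:

```python
# Rough sketch comparing token counts across tokenizers; assumes the
# `transformers` package is installed. Model names below are examples
# (gated models may require `huggingface-cli login` first).
from transformers import AutoTokenizer

prompt = "Summarize the last five replies in this topic for me."

# Candidate tokenizers; pick ones close to the model you serve with vLLM.
candidates = [
    "meta-llama/Meta-Llama-3-8B-Instruct",
    "mistralai/Mistral-7B-Instruct-v0.2",
]

for name in candidates:
    tokenizer = AutoTokenizer.from_pretrained(name)
    count = len(tokenizer.encode(prompt))
    print(f"{name}: {count} tokens")
```

Counts will differ slightly between tokenizers, and that’s fine: the plugin only uses the estimate to keep prompts under the model’s context limit.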

OMG, this is too complex for me atm, I’ll go with SambaNova instead!
